Compare commits

...

1188 Commits

Author SHA1 Message Date
Richard Beales
cb6214e647 Merge branch 'master' into summary_memory 2023-04-30 10:03:05 +01:00
merwanehamadi
dd96d98fa1 Feature/test summarization against memory challenge (#3567)
Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
2023-04-30 09:56:57 +01:00
Luke K
064ac5c742 Refactor AIConfig to Sanitize Input for Goal Parameters (#3492)
* Update remove_color_codes to handle non-string input

The `remove_color_codes` function now accepts any type of input that can be cast to a string. Previously it accepted only string input and did not cast non-string types, which caused errors in some cases.

The changes were made to both logs.py and its corresponding test file.

* Refactor AIConfig to Sanitize Input for Goal Parameters

Details:
- Modified `ai_config.py` to correctly handle and sanitize user input for AI goals and convert them to formatted strings, fixing an issue where some specially formatted ai_settings.yaml files caused goals to load as list[dict] (a minimal sketch follows this entry)
- `test_ai_config.py` includes a test for the `sanitize_input` function in the `AIConfig` class.
- Removed unnecessary tests from `test_logs.py`

* Update for readability

* Update for readability

* Updates for conciseness

* Updated tests to confirm AIConfig saves goals as strings

* Fixed trailing space at end of line

---------

Co-authored-by: Luke Kyohere <lkyohere@mfsafrica.com>
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-29 22:37:41 -07:00
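A minimal sketch of the goal sanitization described in the entry above, assuming a hypothetical helper name (the real `AIConfig.sanitize_input` may differ):

```python
def sanitize_goal(goal) -> str:
    """Coerce a goal loaded from ai_settings.yaml into a plain string.

    YAML such as `- Improve the codebase: top priority` parses as a dict,
    so goals can arrive as list[dict] instead of list[str].
    """
    if isinstance(goal, dict):
        # Flatten a one-entry mapping back into readable "key: value" text.
        goal = ", ".join(f"{key}: {value}" for key, value in goal.items())
    return str(goal).strip()


goals = [{"Improve the codebase": "top priority"}, "Write unit tests"]
print([sanitize_goal(g) for g in goals])
# ['Improve the codebase: top priority', 'Write unit tests']
```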
Toran Bruce Richards
8b82421b9c Run Black and Isort 2023-04-30 17:17:18 +12:00
Toran Bruce Richards
75cc71f8d3 Tweak memory summarisation prompt 2023-04-30 16:44:23 +12:00
Toran Bruce Richards
f287282e8c fix broken partial commit. 2023-04-30 16:43:49 +12:00
Toran Bruce Richards
2a93aff512 Remove thoughts from memory summarisation. 2023-04-30 16:42:57 +12:00
Richard Beales
06ae4684c8 replace 50+ occurrences of print() with logger (#3056)
Co-authored-by: James Collins <collijk@uw.edu>
Co-authored-by: Luke Kyohere <lkyohere@mfsafrica.com>
Co-authored-by: k-boikov <64261260+k-boikov@users.noreply.github.com>
Co-authored-by: Media <12145726+rihp@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-29 23:40:57 -05:00
Toran Bruce Richards
6d1653b84f Change "system" role to "Your Computer". 2023-04-30 15:55:53 +12:00
Toran Bruce Richards
a7816b8c79 Merge branch 'summary_memory' of https://github.com/torantulino/auto-gpt into summary_memory 2023-04-30 14:54:34 +12:00
Toran Bruce Richards
21913c4733 removes current memory global 2023-04-30 14:52:59 +12:00
Toran Bruce Richards
9d9c66d50f Adds check for empty full_message_history 2023-04-30 14:43:31 +12:00
Toran Bruce Richards
a00a7a2bd0 Fix. Update last_memory_index 2023-04-30 14:27:31 +12:00
Toran Bruce Richards
d6cb10432b Provide default new_events value when empty. 2023-04-30 14:26:36 +12:00
Toran Bruce Richards
0bea5e38a4 Replace "assistant" role with "you" when submitting to memory agent. 2023-04-30 14:26:09 +12:00
Toran Bruce Richards
88b2d5fb2d Remove global pre_index from summary_memory. 2023-04-30 14:25:06 +12:00
merwanehamadi
6997bb0bdd memory challenge B (#3550)
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-04-30 01:44:21 +01:00
merwanehamadi
cdd91f7ea3 Feature/challenge memory management (#3425)
Co-authored-by: JS <38794445+jonathansheets517@users.noreply.github.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-29 21:09:58 +01:00
Media
4f72ee7815 Refactor test_spinner to deprecate unittest in favor of pytest (#3532)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-29 12:40:32 -05:00
Media
095883ca54 Removing duplicate tests browse_tests (#3535)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-29 12:16:16 -05:00
Ikko Eltociear Ashimine
f77c3604ce fix typo in testing.md (#3537)
Runing -> Running
2023-04-29 12:05:43 -05:00
k-boikov
2d058feaf8 Extend & improve file operations tests (#3404)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-29 16:55:47 +02:00
Steven Baumann
9c6494aca7 Fix clone_repository to conform to URL validation (#3150)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-29 14:57:48 +02:00
Toran Bruce Richards
f1032926cc Update autogpt/memory_management/summary_memory.py 2023-04-30 00:19:35 +12:00
Toran Bruce Richards
e7ad51ce42 Update autogpt/memory_management/summary_memory.py 2023-04-30 00:19:29 +12:00
Toran Bruce Richards
a3522223d9 Run black formatter 2023-04-29 23:27:03 +12:00
Toran Bruce Richards
4e3035efe4 Integrate summary memory with autogpt system 2023-04-29 23:26:14 +12:00
Toran Bruce Richards
a8cbf51489 Run isort. 2023-04-29 23:22:31 +12:00
Toran Bruce Richards
317361da8c Black formatting 2023-04-29 23:22:08 +12:00
Toran Bruce Richards
991bc77e0b Add complete typing and docstrings 2023-04-29 23:21:21 +12:00
Toran Bruce Richards
83357f6c2f Remove test prints 2023-04-29 23:13:48 +12:00
Toran Bruce Richards
acf48d2d4d Add running summary memory functions. 2023-04-29 23:10:32 +12:00
James Collins
b8478a96ae Feature/llm data structs (#3486)
* Organize all the llm stuff into a subpackage

* Add structs for interacting with llms
2023-04-28 15:04:31 -07:00
Deso
c7d75643d3 Architecture-agnostic dev-container patch, now with Redis 😍 (#3102)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-28 23:39:52 +02:00
BillSchumacher
cfc7817869 update pyproject (#2757)
* update pyproject

* python bump

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 14:25:41 -07:00
James Collins
92009ceb32 More graceful browsing error handling (#3494) 2023-04-28 22:12:47 +01:00
merwanehamadi
aa3e37ac14 Fix memory by adding it only when context window full (#3469)
* Fix memory by adding it only when context window full

* clean json utils
2023-04-28 21:07:49 +01:00
James Collins
3b74d2150e Organize all the llm stuff into a subpackage (#3436) 2023-04-28 12:00:54 -07:00
Media
ee4043ae19 Refactor test_chat to use pytest instead of unittest (#3484)
* refactor_for_pytest

* formatting

---------

Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 11:27:52 -07:00
k-boikov
c1f1da27e7 move remove_color_codes to utils and add tests (#3260)
* move remove_color_codes to utils and add tests

* Fix for ai_settings goals loaded as list(dict)

Some ai_settings formats can cause goals to load as list(dict)
not list(str)

Refactor code in utils.py to explicitly convert the input to a string in the
remove_color_codes() function.

- Updated the remove_color_codes function to convert its input argument
to a string explicitly, to avoid unexpected type errors.
- Added a test case to check conversion of a dict to a string in the
remove_color_codes function (see the sketch after this entry).

* Update tests/test_utils.py

Co-authored-by: James Collins <collijk@uw.edu>

* move remove_color_codes fn and tests to proper files

---------

Co-authored-by: Luke Kyohere <lkyohere@mfsafrica.com>
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 11:13:30 -07:00
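A sketch of the behavior described above, assuming the usual ANSI-escape regex (the repo's exact pattern may differ): cast any input to `str` before stripping color codes, so dicts and other types no longer raise.

```python
import re

ANSI_ESCAPE = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")


def remove_color_codes(s) -> str:
    # Cast first, so non-string input (e.g. a dict loaded from
    # ai_settings.yaml) is handled instead of raising a TypeError.
    return ANSI_ESCAPE.sub("", str(s))


print(remove_color_codes("\x1b[31mred text\x1b[0m"))  # red text
print(remove_color_codes({"goal": "no crash"}))       # {'goal': 'no crash'}
```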
Media
aebe891489 Remove unittest in favor of pytest in the test_token_counter module (#3453)
* init remove unittest for pytest

* docstrings

* black

---------

Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 09:48:30 -07:00
Media
cf5fdabdfc Removing unittest in favor of pytest from test_config.py (#3417)
* removing unittest in favor of pytest

* remove singleton test and unnecessary fixture

---------

Co-authored-by: James Collins <collijk@uw.edu>
2023-04-28 09:32:11 -07:00
rickythefox
20ef130341 Add tests for code/shell execution & improve config fixture (#1268)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-28 14:51:29 +02:00
Johnny C
1772a01d04 Fix URL to docs in API throttling message (#3201)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-27 23:43:56 +02:00
Eddie Cohen
5ce6da95fc Make y/n configurable (#3178)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-27 21:26:47 +02:00
James Collins
94dc6f19aa Add a regression test for the embedding (#3422) 2023-04-27 11:48:18 -07:00
Dhruv Awasthi
427b8648ee Fix README: remove redundant "Disclaimer" (#3391)
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-27 19:24:28 +01:00
Iliass
4b54e3c6d8 Update broken link (#3416)
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-27 19:18:44 +01:00
Irmius
6b4ad1f933 Fix browse_website headless mode for Firefox (#2816)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-27 19:32:31 +02:00
Reinier van der Leer
3d89ed1787 Fix imports, type hints and fixtures for goal oriented tests (#3415) 2023-04-27 19:16:56 +02:00
merwanehamadi
adbb47fb65 scrape text regression test (#3387)
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-27 09:27:15 -07:00
Montana Flynn
7cd76b8d8e Add makedirs to file operations (#3289)
* Add makedirs to file operations

* Add new directory tests for file operations

* Fix wrong setUp test error

* Simplify makedirs and use correct nested path

* Fix linter error

---------

Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-27 09:12:24 -07:00
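A minimal sketch of the makedirs change described above (function name and return message are illustrative, not the repo's exact code):

```python
import os


def write_to_file(filename: str, text: str) -> str:
    # Create any missing parent directories so writes to nested
    # paths like "reports/2023/notes.txt" no longer fail.
    directory = os.path.dirname(filename)
    if directory:
        os.makedirs(directory, exist_ok=True)
    with open(filename, "w", encoding="utf-8") as f:
        f.write(text)
    return "File written to successfully."
```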
Reinier van der Leer
9e17a304de Minor improvements to the docs for voice config and testing (#3407) 2023-04-27 08:58:35 -07:00
Reinier van der Leer
7a161cc0bd Add .gitattributes (#3402) 2023-04-27 06:28:18 -07:00
BillSchumacher
d8c16de123 The unlooping and fixing of file execution. (#3368)
* The unlooping and fixing of file execution.

* lint

* Use static random seed during testing. remove unused import.

* Fix bug

* Actually fix bug.

* lint

* Unloop a bit more and fix json.

* Fix another bug.

* lint.

---------

Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
2023-04-26 21:07:28 -07:00
chyezh
65b6c2706e fix connection bug for zilliz uri on milvus (#3278)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
Co-authored-by: merwanehamadi <merwanehamadi@gmail.com>
2023-04-26 18:57:29 -07:00
Robin Richtsfeld
76bd192f82 Set vcr_config scope to "session" (#3361)
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-26 20:55:01 -05:00
merwanehamadi
02f546d2bc Run the integration tests in the CI pipeline BUT without API keys (#3359)
* integration tests in ci pipeline

* Update CONTRIBUTING.md

Co-authored-by: Reinier van der Leer <github@pwuts.nl>

---------

Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-26 20:45:03 -05:00
Eddie Cohen
3b56716a68 Hotfix/validate url strips query params (#3370)
* reconstruct url in sanitize

* tests for url validation

---------

Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-26 20:20:15 -05:00
merwanehamadi
a3195d84d3 remove do nothing (#3369) 2023-04-26 19:55:02 -05:00
Reinier van der Leer
bfaf36099e Fix(workspace) root resolution (#3365) 2023-04-26 16:43:21 -07:00
merwanehamadi
7a006afb17 fix cassettes recording (#3342) 2023-04-26 13:11:08 -07:00
WladBlank
cd8fdb31ef Chat plugin capability (#2929)
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-26 15:08:39 -05:00
karlivory
a0cfdb0830 fix set_total_budget docstring (#3288) 2023-04-26 12:18:12 -07:00
James Collins
83f11465f5 Clean up image generation tests (#3338) 2023-04-26 12:07:28 -07:00
Reinier van der Leer
76df14b831 Fix docs (#3336)
* Fix docs

* Add short section about testing to contribution guide

* Add back note for voice configuration

* Remove LICENSE symlink from docs/

* Fix site_url in mkdocs.yml
2023-04-26 19:14:14 +01:00
merwanehamadi
109fa04c7c test image gen (#3287) 2023-04-26 10:23:05 -07:00
merwanehamadi
a6355a6bc8 use pytest-recording with VCR (#3283) 2023-04-26 09:57:05 -07:00
James Collins
0ff471a49a Have api manager use singleton pattern (#3269)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-26 11:37:49 -05:00
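A sketch of the singleton pattern named above, using a metaclass (one common Python approach; assumed, not necessarily the repo's exact implementation):

```python
class Singleton(type):
    """Metaclass that returns the same instance for every construction."""

    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class ApiManager(metaclass=Singleton):
    def __init__(self):
        self.total_cost = 0.0


assert ApiManager() is ApiManager()  # both calls yield the same object
```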
merwanehamadi
4241fbbbf0 mock openai in test image gen (#3285) 2023-04-26 09:11:31 -07:00
✔️ITtechtor
3ae6c1b03f Update installation.md (#3325) 2023-04-26 08:50:43 -07:00
vlad
1e71f952f9 Codecov - don't fail pipelines for project cov changes (#3327)
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-26 09:54:22 -05:00
apurvsibal
749b1bbfc0 Fix(docs) requirements link in installation guide (#3264) 2023-04-26 13:59:53 +02:00
Reinier van der Leer
265a23212e Fix(docs) Contributing, CoC and License links (#3308) 2023-04-26 11:40:37 +01:00
Richard Beales
f0f34030a0 Fix docs alignment (#3302) 2023-04-26 02:52:33 -05:00
Robin Richtsfeld
d75379358f Fix get_ada_embedding return type (#3263) 2023-04-25 16:52:38 -07:00
Nicholas Tindle
8670b3039e Fix PR size autolabeler message (#3194) 2023-04-26 00:25:38 +02:00
James Collins
eec86a7b82 Load .env in package init (#3251) 2023-04-25 14:53:13 -07:00
Peter Svensson
fac8f7da21 adding back the probably erroneously removed return value from execute_shell, which otherwise always gives 'None' in return - not ideal (#3212)
Co-authored-by: James Collins <collijk@uw.edu>
2023-04-25 13:32:39 -07:00
James Collins
6fbac455d4 Remove import time loading of config from llm_utils (#3245) 2023-04-25 12:10:12 -07:00
Richard Beales
1806fc683d Fix readme centering (#3243) 2023-04-25 19:50:22 +01:00
James Collins
f962939737 Use explicit API keys when querying openai rather than import time manipulation of the package attributes (#3241) 2023-04-25 11:38:06 -07:00
James Collins
2619740daa Extract OpenAI API retry handler and unify ADA embeddings calls. (#3191)
* Extract retry logic, unify embedding functions (sketched after this entry)

* Add some docstrings

* Remove embedding creation from API manager

* Add test suite for retry handler

* Make api manager fixture

* Fix typing

* Streamline tests
2023-04-25 11:12:24 -07:00
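A hedged sketch of an API retry handler like the one described above: exponential backoff with jitter on transient failures (decorator name, error handling, and parameters are assumptions):

```python
import functools
import random
import time


def retry_api(num_retries: int = 10, backoff_base: float = 2.0):
    """Retry the wrapped call with exponential backoff plus jitter."""

    def decorator(func):
        @functools.wraps(func)
        def wrapped(*args, **kwargs):
            for attempt in range(1, num_retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:  # e.g. a rate-limit or transient server error
                    if attempt == num_retries:
                        raise
                    time.sleep(backoff_base ** attempt + random.random())

        return wrapped

    return decorator


@retry_api(num_retries=3)
def create_embedding(text: str):
    ...  # would call the OpenAI embeddings endpoint here
```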
Reinier van der Leer
940b115f0a remove plugin notice from CONTRIBUTING.md (#3227) 2023-04-25 10:05:58 -07:00
merwanehamadi
58d84787f3 Test Agent.create_agent_feedback (#3209) 2023-04-25 08:41:57 -07:00
Peter Petermann
6fc6ea69d2 this changes it so the file from config is used, rather than a hardcoded name that might not exist (#3189) 2023-04-25 07:56:59 +01:00
Toran Bruce Richards
93bbd13a34 Update README.md 2023-04-25 17:36:41 +12:00
AbTrax
ae31dd4bb1 Feature: Added Self Feedback (#3013)
* Feature: Added Self Feedback

* minor fix: complied with flake8

* Add: Self Feedback To Usage.md

* Add: role/goal alignment

* Added: warning to usage.md

* fix: Formatted with black

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 06:28:06 +01:00
Toran Bruce Richards
411a13a0d4 Update README.md 2023-04-25 17:27:29 +12:00
Nicholas Tindle
eb0e96715e docs fix to image generation (#3186) 2023-04-25 06:03:31 +01:00
James Collins
7e5afd8744 Refactor/decouple logger from global configuration (#3171)
* Decouple logging from the global configuration

* Configure logging first

* Clean up global voice engine creation

* Remove class vars from logger

* Remove duplicate implementation of

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 05:41:30 +01:00
✔️ITtechtor
960eb4f367 Update installation.md (#3166) 2023-04-25 05:36:03 +01:00
Duong HD
956d9fdcd6 Add a little more descriptive installation instruction (#3180)
* add Dev Container installation instruction to installation.md

* add Dev Container installation instruction to installation.md

* Update installation.md

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 05:34:59 +01:00
Lawrence Neal
140fd6f3bf Ensure Fore.RED is followed by Fore.RESET (#3182)
This properly resets the terminal, ensuring that the red text is red and
the normal text remains unaffected.

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-25 05:32:59 +01:00
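A small illustration of the colorama pairing described above (assumed usage, not the exact call site):

```python
from colorama import Fore, init

init()  # enable ANSI handling, notably on Windows

# Following Fore.RED with Fore.RESET confines the color to the error text,
# so subsequent output stays in the terminal's normal color.
print(f"{Fore.RED}Error: something went wrong{Fore.RESET} - continuing normally")
```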
Deso
3d47b47901 Update bulletin to warn about deprecation (#3181) 2023-04-25 05:28:46 +01:00
Nicholas Tindle
c7f4734826 Update ci.yml (#3179) 2023-04-25 03:53:06 +01:00
Daniel Chen
45f9b570a2 Re-add install-plugin-deps to CLI (#3170) 2023-04-24 20:11:19 -05:00
Daniel Chen
29284a5460 Add option to install plugin dependencies (#3068)
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-24 17:42:10 -05:00
James Collins
dfcbf6eee6 Refactor/move singleton out of config module (#3161) 2023-04-24 17:24:57 -05:00
James Collins
83b91a31bc Remove dead permanent memory module (#3145)
* Remove dead permanent memory module

* Delete sqlite db that snuck in
2023-04-24 21:48:37 +01:00
James Collins
b984f985bc Hotfix/global agent manager workaround (#3157)
* Add indirection layer to entry point

* Get around singleton pattern for AgentManager to fix tests
2023-04-24 21:27:31 +01:00
Lei Zhang
a5cc67badd annotation fix (#3018)
* annotation fix

* fix param name and type

---------

Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-24 21:08:02 +01:00
James Collins
8bf4eb7e90 Merge pull request #3152 from collijk/refactor/add-indirection-layer-around-entry-point
Add indirection layer to entry point
2023-04-24 12:32:59 -07:00
Nicholas Tindle
128d83a0c8 Merge branch 'master' into refactor/add-indirection-layer-around-entry-point 2023-04-24 14:28:56 -05:00
Media
5de1025520 Agent and agent manager tests (#3116)
* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* init

* more tests

* passing tests

* skip gitbranch decorator on ci

* decorator skiponci

* black

* Update tests/utils.py decorator of skipping ci

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>

* black

* I oopsed the name

* black

* finally

* simple tests for agent and manager

* isort

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-24 14:19:42 -05:00
James Collins
4a206168a7 Merge branch 'master' into refactor/add-indirection-layer-around-entry-point 2023-04-24 12:14:59 -07:00
James Collins
5f646498c4 Add indirection layer between cli and application start 2023-04-24 12:12:14 -07:00
James Collins
06e81b7dfd Merge pull request #3147 from collijk/bugfix/error-on-null-bytes-in-path-windows
Error if null bytes are included in the path on Windows
2023-04-24 12:02:41 -07:00
Nicholas Tindle
97d2f417c7 Merge branch 'master' into bugfix/error-on-null-bytes-in-path-windows 2023-04-24 13:55:41 -05:00
YOUNESS ZEMZGUI
45f2513a73 Adjust test_json_parser file (#1935)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-24 19:54:46 +02:00
James Collins
1f58ca47b5 Merge branch 'master' into bugfix/error-on-null-bytes-in-path-windows 2023-04-24 10:47:46 -07:00
James Collins
17819e2a55 More robust null byte checking 2023-04-24 10:28:51 -07:00
Reinier van der Leer
ffdc652605 Clean up GitHub Workflows (#3059)
* initial cleanup of github workflows

* only run pr-label workflow on push to master

* move docker ci/release summaries to scripts

* add XS label for PR's under 2 lines

* draft test job for Docker CI

* fix & activate Docker CI test job

* add debug step to docker CI

* fix Docker CI test container env

* Docker CI build matrix

* fixup build summaries

* fix pipes in summary

* optimize Dockerfile for layer caching

* more markdown escaping

* add gha cache scopes

* add Docker CI cache clean workflow
2023-04-24 18:03:21 +01:00
k-boikov
3886afc825 fix test_search_files for windows (#3073)
Co-authored-by: Richard Beales <rich@richbeales.net>
2023-04-24 17:42:08 +01:00
fluxism
cade788a7e Add <reason> arg to do_nothing command (#3090)
* Add <reason> arg to do_nothing command

* do_nothing returns reason arg
2023-04-24 16:12:15 +01:00
Reinier van der Leer
9c60eecce6 Improve docker setup & config (#1843)
* Improve docker setup & config

* fix(browsing): Selenium needs access to home directory

* fix(docker): allow overriding memory backend settings

* simplify Dockerfile and docker-compose config

* add .dockerignore

* adjust Docker CI with release build type arg

* replace Chrome by Chromium in devcontainer

* update docs

* update bulletin

* use preinstalled chromedriver in web_selenium.py

* update installation.md

* fix code blocks for mkdocs

* fix links to docs
2023-04-24 14:27:53 +01:00
Andres Caicedo
f8dfedf1c6 Add function and class descriptions to tests (#2715)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-24 14:55:49 +02:00
Eddie Cohen
40a75c804c Validate URLs in web commands before execution (#2616)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-24 12:33:44 +02:00
Soheil Sam Yasrebi
794a164098 handle API timeouts (#3024) 2023-04-24 08:26:14 +01:00
scout9ll
89125376ba Fixed incorrect comment: Clear memory instead of Redis (#3092)
Co-authored-by: liaolin.qiu <liaolin.qiu@qingteng.cn>
2023-04-24 08:07:08 +01:00
Pi
efc17f21b9 Merge pull request #3089 from collijk/hotfix/config-sequencing-bug
Resolve sequencing issue in global state management
2023-04-24 04:53:51 +01:00
James Collins
7ddc44d48e Resolve sequencing issue in global state management 2023-04-23 20:44:53 -07:00
James Collins
e8473d4920 Merge pull request #3066 from collijk/bugfix/make-local-memory-json-when-it-doesnt-exist
Bugfix/make local memory json when it doesnt exist
2023-04-23 17:49:36 -07:00
James Collins
91aa40e0df Remove another global memory access 2023-04-23 16:59:49 -07:00
James Collins
882a9086a8 Merge branch 'bugfix/make-local-memory-json-when-it-doesnt-exist' of github.com:collijk/Auto-GPT into bugfix/make-local-memory-json-when-it-doesnt-exist 2023-04-23 16:55:26 -07:00
James Collins
43fa67ca81 Remove unnecessary memory call 2023-04-23 16:54:32 -07:00
James Collins
715916a5ba Merge branch 'master' into bugfix/make-local-memory-json-when-it-doesnt-exist 2023-04-23 16:44:59 -07:00
James Collins
a28b8906a6 Add tests in pytest 2023-04-23 16:40:53 -07:00
James Collins
aedd288dbe Refactor/collect embeddings code (#3060)
* Collect all embedding code into a single module

* Collect all embedding code into a single module

* actually, llm_utils is a better place

* Oh, and remove the module now that we don't use it

---------

Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-23 17:50:50 -05:00
James Collins
680c7b5aaa Make local json cache when it doesn't exist 2023-04-23 15:43:04 -07:00
Media
374f543bea Perm memory test cases (#2996)
* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* init

* more tests

* passing tests

* skip gitbranch decorator on ci

* decorator skiponci

* black

* Update tests/utils.py decorator of skipping ci

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>

* black

* I oopsed the name

* black

* finally

* perm memory tests

* perm memory tests

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-23 16:50:15 -05:00
coditamar
ec71075bfe Add tests for json_utils.json_fix_llm (#2952)
* config.py: make load_dotenv(override=True)

* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add test for fix_json_using_multiple_techniques

* style

* style

* mocking try_ai_fix to avoid call_ai_function

* black style

* mock try_ai_fix to avoid calling the AI model

* removed mock, as we can add @requires_api_key("OPEN_API_KEY")

* style

* reverse merge conflict related files and changes

* bring back the mock for try_ai_fix

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-23 16:29:40 -05:00
Vwing
d6ef9d1b5d Make Auto-GPT aware of its running cost (#762)
* Implemented running cost counter for chat completions

This data is known to the AI as additional system context, and is printed out to the user

* Added comments to api_manager.py

* Added user-defined API budget.

The user is now prompted whether they want to give the AI a budget for API calls. If they enter nothing, there is no monetary limit; if they define a budget, the AI will be told to shut down gracefully once it has come within 1 cent of its limit, and to shut down immediately once it has exceeded it. If a budget is defined, Auto-GPT is always aware of how much it was given and how much remains to be spent.

* Chat completion calls are now done through api_manager. Total running cost is printed.

* Implemented api budget setting and tracking

User can now configure a maximum api budget, and the AI is aware of it and of its remaining budget. The AI is instructed to shut down when exceeding the budget (a minimal sketch of this tracking follows this entry).

* Update autogpt/api_manager.py

Change "per token" to "per 1000 tokens" in a comment on the api cost

Co-authored-by: Rob Luke <code@robertluke.net>

* Fixed lint errors

* Include embedding costs

* Add embedding completion cost

* lint

* Added 'requires_api_key' decorator to test_commands.py, switched to a valid chat completions model

* Refactor API manager, add debug mode, and add tests

- Extract model costs to  to avoid duplication
- Add debug mode parameter to ApiManager class
- Move debug mode configuration to
- Log AI response and budget messages in debug mode
- Implement 'test_api_manager.py'

* Fixed test_setup failing. An extra user input is needed for api budget

* Linting

---------

Co-authored-by: Rob Luke <code@robertluke.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
2023-04-23 16:04:31 -05:00
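A minimal sketch of the running-cost and budget tracking described above; prices, names, and message wording are illustrative assumptions, not Auto-GPT's exact values:

```python
COST_PER_1K_TOKENS = {  # assumed example prices in USD
    "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
}


class ApiManager:
    def __init__(self, total_budget: float = 0.0):
        self.total_cost = 0.0
        self.total_budget = total_budget  # 0.0 means "no monetary limit"

    def update_cost(self, prompt_tokens: int, completion_tokens: int, model: str):
        price = COST_PER_1K_TOKENS[model]
        self.total_cost += (
            prompt_tokens * price["prompt"]
            + completion_tokens * price["completion"]
        ) / 1000

    def budget_message(self) -> str:
        """System-context message matching the behavior described above."""
        if not self.total_budget:
            return ""
        remaining = self.total_budget - self.total_cost
        if remaining <= 0:
            return "Budget exceeded! Shut down immediately."
        if remaining <= 0.01:  # within one cent of the limit
            return "Budget nearly exceeded. Shut down gracefully."
        return f"${remaining:.3f} of your ${self.total_budget:.3f} budget remains."
```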
hdkiller
bf895eb656 fix typo in warning message (#3044) 2023-04-23 21:28:48 +01:00
James Collins
dcd6aa912b Add workspace abstraction (#2982)
* Add workspace abstraction

* Remove old workspace implementation

* Extract path resolution to a helper function

* Add api key requirements to new tests
2023-04-23 14:36:04 -05:00
chyezh
da48f9c972 Fix Milvus module config import (#3036)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-23 18:32:17 +02:00
chyezh
cac1ea27e2 Support secure and authenticated Milvus memory backends (#2127)
Co-authored-by: Reinier van der Leer (Pwuts) <github@pwuts.nl>
2023-04-23 18:11:04 +02:00
Pi
6e588bb2ed Merge pull request #3030 from Significant-Gravitas/richbeales-patch-1
update documentation deploy gh action
2023-04-23 16:49:32 +01:00
Richard Beales
1c352f5ff0 update documentation deploy gh action 2023-04-23 16:42:12 +01:00
Didier Durand
582c85b140 Documentation: ensuring naming consistency (#2975)
auto gpt -> Auto-GPT to ensure naming consistency on the page
2023-04-23 09:28:08 +01:00
Didier Durand
a38646409f Documentation: fixing typos (#2978)
Fixing a couple of typos
2023-04-23 09:26:54 +01:00
non-adjective
4906e3d7ef update weaviate.py for weaviate compatibility (#2985)
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/weaviate/schema/crud_schema.py", line 708, in _create_class_with_primitives
    raise UnexpectedStatusCodeException("Create class", response)
weaviate.exceptions.UnexpectedStatusCodeException: Create class! Unexpected status code: 422, with response body: {'error': [{'message': "'Auto-gpt' is not a valid class name"}]}.

GPT4:

The error message indicates that "Auto-gpt" is not a valid class name. In Weaviate, class names must start with a capital letter and can contain only alphanumeric characters.

Took the advice and code and applied it to weaviate.py to great result; the program now runs with no error!

Unable to reproduce easily. Might be related to switching memory between Local and Weaviate? Either way, the proposed solution works for macOS using Docker + Weaviate.
2023-04-23 09:17:42 +01:00
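A hypothetical sketch of the fix described above: Weaviate class names must be alphanumeric and start with a capital letter, so an index name like "Auto-gpt" has to be normalized first.

```python
import re


def to_weaviate_class_name(index: str) -> str:
    # Drop non-alphanumeric characters and capitalize, per Weaviate's
    # class-name rules ("Auto-gpt" is rejected with a 422 otherwise).
    return re.sub(r"[^A-Za-z0-9]", "", index).capitalize()


print(to_weaviate_class_name("Auto-gpt"))  # Autogpt
```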
Richard Beales
9ed2a7a2d2 Add missing test decorator (#2989)
* Mark test test_generate_aiconfig_automatic_typical  as @requires_api_key("OPENAI_API_KEY")

* missing import

* add missing decorator
2023-04-23 09:17:18 +01:00
Richard Beales
eaa6ed85e1 Fix to prompt generator - "Ensure the response can be parsed" (#2980) 2023-04-23 09:07:14 +01:00
Nicholas Tindle
0b08b4f1c5 Update installation.md (#2970) 2023-04-23 07:39:13 +01:00
Richard Beales
bb786461c7 Mark test test_generate_aiconfig_automatic_typical as @requires_api_… (#2981)
* Mark test test_generate_aiconfig_automatic_typical  as @requires_api_key("OPENAI_API_KEY")

* missing import
2023-04-23 07:35:17 +01:00
Didier Durand
bc354a3df6 Documentation typo: serach -> search (#2977) 2023-04-23 07:23:48 +01:00
Toran Bruce Richards
f462674e32 Automatic prompting (#2896)
* Add automatic ai prompting

* Tweak the default prompt.

* Print agent info upon creation.

* Improve system prompt

* Switch to fast_llm_model by default

* Add format output command to user prompt.

This vastly improves formatting success rate.

* Add fallback to manual mode if llm output cannot be parsed (or other error).

* Add unit test to cover ai creation setup.

* Replace redundant prompt with manual mode instructions.

* Add missing docstrings and typing.

* Runs black on changes.

* Runs isort

* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* merge

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
2023-04-23 06:36:10 +01:00
Media
2b5852f7da Tests utils suite (#2961)
* Update Python version and benchmark file in benchmark.yml

* Refactor main function and imports in cli.py

* Update import statement in ai_config.py

* Add set_temperature and set_memory_backend methods in config.py

* Remove unused import in prompt.py

* Add goal oriented tasks workflow

* Added agent_utils to create agent

* added pytest and vcrpy

* added write file cassette

* created goal oriented task write file with cassettes to not pay openai tokens

* solve conflicts

* add ability set azure because github workflow needs it off

* solve conflicts in cli.py

* black because linter fails

* solve conflict

* setup github action to v3

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* fix conflicts

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* Plugins: debug line always printed in plugin load

* add decorator to tests

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* move decorator higher up

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>

* init

* more tests

* passing tests

* skip gitbranch decorator on ci

* decorator skiponci

* black

* Update tests/utils.py decorator of skipping ci

Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>

* black

* I oopsed the name

* black

* finally

* black

* wrong file

---------

Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwanehamadi@gmail.com>
Co-authored-by: Merwane Hamadi <merwane.hamadi@redica.com>
Co-authored-by: Richard Beales <rich@richbeales.net>
Co-authored-by: Nicholas Tindle <nick@ntindle.com>
Co-authored-by: BillSchumacher <34168009+BillSchumacher@users.noreply.github.com>
Co-authored-by: Nicholas Tindle <nicktindle@outlook.com>
2023-04-22 19:07:28 -05:00
Nicholas Tindle
986bdaab36 Merge pull request #2946 from merwanehamadi/feature/add-decorator-to-tests
add decorator to tests so they're skipped if an API key is required but not present in the environment
2023-04-23 00:45:55 +02:00
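A sketch of a `requires_api_key` decorator matching the behavior described in the PR above, built on `pytest.mark.skipif` (implementation assumed):

```python
import os

import pytest


def requires_api_key(env_var: str):
    """Skip the decorated test when the named key is absent from the environment."""
    return pytest.mark.skipif(
        not os.environ.get(env_var),
        reason=f"{env_var} is not set; skipping test that would call the real API",
    )


@requires_api_key("OPENAI_API_KEY")
def test_generate_aiconfig_automatic_typical():
    ...
```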
BillSchumacher
d3e4ec14a6 Merge pull request #2936 from Significant-Gravitas/richbeales-patch-1
Plugins: debug line always printed in plugin load
2023-04-23 00:45:54 +02:00
Merwane Hamadi
b7cd56f72b move decorator higher up
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-04-23 00:45:54 +02:00
Richard Beales
78a6b44b21 Plugins: debug line always printed in plugin load 2023-04-23 00:45:53 +02:00
Merwane Hamadi
eb5a8a87d8 add decorator to tests
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
2023-04-23 00:45:53 +02:00
Nicholas Tindle
0410331ecd Merge pull request #2931 from riensen/fix/multiple-plugins
Enable support for loading multiple plugins per zip file
2023-04-23 00:45:50 +02:00
Merwane Hamadi
996a3b331a Add CI smoke test (#2461) 2023-04-23 00:23:45 +02:00
riensen
8173e4d139 Fix: Multiple plugins per zip for Auto-GPT-Plugins 2023-04-22 18:31:04 +02:00
Richard Beales
5a95ead608 Merge pull request #2521 from jazelly/fix-benchmark-typo
misc: fix typo in benchmark
2023-04-22 17:06:10 +01:00
Richard Beales
f04755be30 Merge pull request #2631 from BillSchumacher/fix-command-arg-ordering
Fix plugin command arg ordering issue.
2023-04-22 17:02:33 +01:00
Richard Beales
ea26988a95 run black and isort on behalf of OP 2023-04-22 16:58:21 +01:00
Richard Beales
f9f540738c Merge pull request #2708 from ugobok/patch-1
Replace print statements with logging.error
2023-04-22 16:30:12 +01:00
Richard Beales
894027f5f6 run black and isort on behalf of OP 2023-04-22 16:25:03 +01:00
Richard Beales
8e8a5a1522 Merge pull request #2915 from dharana77/master
ci: selenium safari bug fixed
2023-04-22 15:24:54 +01:00
lee
1ffa9b2ebe ci: selenium safari bug fixed
ModuleNotFoundError: No module named 'selenium.webdriver.safari.options' when selenium <= 4.1.3 is installed
2023-04-22 22:00:23 +09:00
Richard Beales
ad5d8b2341 Re-work Docs and split out README (using MkDocs) (#2894)
* Initial Documentation re-org

* remove testing link from readme

* rewrite quickstart

* get code blocks working across mkdocs and github

* add link to plugins repo

* add link to plugins repo and move readme to plugin template repo

* Add emoji to "Extensibility with Plugins" in readme

Co-authored-by: Reinier van der Leer <github@pwuts.nl>

* Make docs deploy workflow path-selective

* Also run workflow when the workflow is updated

* fix readme links under configuration subfolder

* shrink subheadings in readme

---------

Co-authored-by: Toran Bruce Richards <toran.richards@gmail.com>
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-22 12:56:22 +01:00
Nicholas Tindle
e9f3f9bd1d Add CodeCov CI coverage requirements (#2881) 2023-04-22 13:04:39 +02:00
Toran Bruce Richards
e39cd1bf57 Fix(tests): restore config values after changing them in tests (#2904) 2023-04-22 12:14:18 +02:00
Richard Beales
0efbe23d89 Merge pull request #2756 from gklab/master
adjust file_operations.py code format
2023-04-22 10:24:54 +01:00
Nicholas Tindle
b4bd11d708 Merge pull request #2888 from didier-durand/patch-2
Fixing header of CONTRIBUTING.md
2023-04-22 02:46:04 -05:00
Didier Durand
fe0baf233d Fixing link 2023-04-22 09:39:18 +02:00
Nicholas Tindle
e9e1f04818 Merge pull request #2884 from minghinmatthewlam/update-speech-readme
Update README to include Eleven Labs speech setup
2023-04-22 02:33:47 -05:00
Nicholas Tindle
602b6e9901 Merge pull request #2886 from didier-durand/patch-1
Documentation: fixing typos in README.md
2023-04-22 02:32:58 -05:00
Didier Durand
1b043305c1 Fixing header of CONTRIBUTING.md
ProjectName -> Auto-GPT
2023-04-22 09:32:35 +02:00
Didier Durand
ba87cb0867 Fixing typos in README.md
Fixing some typos in README
2023-04-22 09:14:52 +02:00
Matthew Lam
798d2d6978 update readme for speech 2023-04-21 23:51:05 -07:00
Nicholas Tindle
fc4b5ad1d2 Merge pull request #2855 from OmriGM/tests/basic-spinner-tests
Added basic spinner tests and modified spinner method docstring
2023-04-22 01:27:33 -05:00
Omri Grossman
e09bbc43d4 Merge branch 'master' into tests/basic-spinner-tests 2023-04-22 09:24:25 +03:00
Richard Beales
ca31c4699a Merge pull request #2877 from Significant-Gravitas/codecov
Add Code Cov
2023-04-22 07:05:44 +01:00
Nicholas Tindle
6e5df9e9e7 feat: add code cov 2023-04-22 00:45:29 -05:00
Richard Beales
780a77bb31 Merge pull request #2679 from AndresCdo/feature/add-error-exceptions
[feat] Update milvus_memory_test.py error log
2023-04-22 06:43:53 +01:00
Richard Beales
f342b84479 Merge pull request #2851 from sudouser777/fix/typo
fixed typo
2023-04-22 06:31:42 +01:00
ZHAOKAI WANG
019ac37d49 Merge branch 'master' into master 2023-04-22 10:40:22 +08:00
Steve
3ab67e746d Add file op tests (#2205)
Co-authored-by: Steven Byerly <stevenbyerly@microsoft.com>
2023-04-22 04:17:38 +02:00
Omri Grossman
e8aaba9ce2 Run pre commit manually to fix linting and sorting issues 2023-04-22 01:25:20 +03:00
Omri Grossman
f3ac658dd0 Reorder imports 2023-04-22 01:18:03 +03:00
Omri Grossman
7c4921758c Added basic spinner tests and modified spinner method docstring 2023-04-22 01:13:32 +03:00
Raju Komati
3bf5934b20 fixed typo 2023-04-22 02:52:13 +05:30
Richard Beales
a8fe3085fd Merge pull request #2558 from jlxip/master
Use readline if available
2023-04-21 21:17:14 +01:00
Richard Beales
14a1588ffd Merge pull request #2837 from BuildEverything/master
Update readme to more clearly describe usage between platforms
2023-04-21 21:05:42 +01:00
jlxip
504a85bbdb Use readline if available 2023-04-21 22:01:06 +02:00
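The "use readline if available" pattern named above typically looks like this minimal sketch: importing readline enables line editing and history for input(), and its absence (e.g. on some Windows setups) is tolerated.

```python
try:
    # Importing readline has the side effect of enabling line editing
    # and history for input(); it isn't available on every platform.
    import readline  # noqa: F401
except ImportError:
    pass
```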
Mikel Calvo
9dcdb6d6f8 Add OS Info into the initial prompt (#2587) 2023-04-21 21:44:02 +02:00
Tommy Brooks
1520816e61 chore: update readme to more clearly describe usage between platforms 2023-04-21 14:37:03 -04:00
Richard Beales
a2e16695af Merge pull request #2771 from ntindle/patch-2
[Hotfix] Fix coverage tooling
2023-04-21 19:30:23 +01:00
Richard Beales
e2b599051e Merge pull request #2802 from okunishinishi/patch-1
Update README.md ( `IMAGE_PROVIDER=sd` => `IMAGE_PROVIDER=huggingface` )
2023-04-21 19:01:46 +01:00
Richard Beales
e20d388ec9 Merge pull request #2406 from pkqs90/docker-readme-upd
Fix docker usage readme
2023-04-21 19:00:24 +01:00
Richard Beales
44d3302b4e Merge pull request #2832 from k-boikov/bug/yml-tab-issue
fix indentation of bug template yml
2023-04-21 18:50:54 +01:00
Kris
72a56acfb8 fix indentation of bug template yml 2023-04-21 20:39:37 +03:00
Richard Beales
b975aaa848 Merge pull request #2785 from T-Higgins/master
Update README.md
2023-04-21 18:18:12 +01:00
Taka Okunishi
77de428524 Update README.md ( `IMAGE_PROVIDER=sd` => `IMAGE_PROVIDER=huggingface` )
modify snippet in README
2023-04-21 21:40:57 +09:00
coditamar
6c5d21cbfc config.py: make load_dotenv(override=True) (#2788) 2023-04-21 12:24:26 +02:00
Tony H
04093e9517 Update README.md
Made steps clearer, made some sentences clearer, and generally fixed grammar and punctuation.
Reason: I'm a Knowledge Base writer for software products.
2023-04-21 08:37:58 +01:00
pkqs90
8364426420 docker-compose 2023-04-21 14:18:09 +08:00
Nicholas Tindle
3dd07d3119 fix: workflow name 2023-04-21 01:02:10 -05:00
Nicholas Tindle
68803d559c comment the stuff 2023-04-21 01:00:02 -05:00
Nicholas Tindle
a63fc643c8 fix:? 2023-04-21 00:55:52 -05:00
Nicholas Tindle
7a9c6a52fa Update ci.yml 2023-04-21 00:49:07 -05:00
Nicholas Tindle
81de438569 try something new 2023-04-21 00:41:44 -05:00
Nicholas Tindle
185429287e Update ci.yml 2023-04-21 00:35:46 -05:00
Nicholas Tindle
c2f86f6934 Update ci.yml 2023-04-21 00:34:11 -05:00
Nicholas Tindle
7f99fa3da8 Update ci.yml 2023-04-21 00:30:39 -05:00
Nicholas Tindle
c58cf15565 hotfix: don't upload results on push 2023-04-21 00:27:19 -05:00
Richard Beales
4eaec80438 Merge pull request #2313 from lengweiping1983/arg_config_optimization
only adjust argument order
2023-04-21 06:17:51 +01:00
Richard Beales
7b22809530 Merge pull request #2545 from cryptidv/fixes
Added version select to bug template
2023-04-21 06:14:05 +01:00
Richard Beales
d573bee791 Merge pull request #2651 from riensen/patch-1
Improve plugin section in README.md to prevent dependency errors
2023-04-21 06:07:12 +01:00
ZHAOKAI WANG
e7c2a4068e Update file_operations.py 2023-04-21 13:06:44 +08:00
ZHAOKAI WANG
45a9ff6e74 Update file_operations.py 2023-04-21 13:03:52 +08:00
Richard Beales
781f2934e6 Merge pull request #2682 from AndresCdo/update-pre-commit-version
Update pre-commit version
2023-04-21 06:00:59 +01:00
Richard Beales
1b5743dc73 Merge pull request #2705 from itsmarble/add_show_env_fiile_instruct
add instruction to show .env
2023-04-21 05:59:48 +01:00
Richard Beales
26ee15d327 Merge pull request #2709 from Bsodoge/fix-typo
Update README.md
2023-04-21 05:54:09 +01:00
Richard Beales
78bddf3055 Merge pull request #2752 from chrisvxd/patch-1
docs: fix small typo in README
2023-04-21 05:51:11 +01:00
Richard Beales
de1ea5f916 Merge pull request #2758 from Pwuts/fix/silent-azure-fail
Make `load_azure_config` throw if `azure.yaml` does not exist
2023-04-21 05:48:09 +01:00
Richard Beales
d5162d332f Merge pull request #2628 from ntindle/ci/coverage-reporting
Add Coverage reporting to CI pipeline
2023-04-21 05:46:01 +01:00
lengweiping1983
63c2182870 Fix typos (#2735)
Co-authored-by: lengweiping <lengweiping@vinotar.com>
2023-04-21 06:09:17 +02:00
Reinier van der Leer
b49ef913a8 Make load_azure_config throw if azure.yaml does not exist 2023-04-21 05:15:39 +02:00
Nick Foster
ec27d5729c Fix label of download_file command (#2753)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-21 04:55:20 +02:00
gklab
a2e75aabdd adjust file_operations.py code format 2023-04-21 10:19:28 +08:00
Chris Villa
6b7787ce99 docs: fix small typo in README 2023-04-21 14:19:00 +12:00
ZHAOKAI WANG
b05d56462b Merge branch 'Significant-Gravitas:master' into master 2023-04-21 10:01:03 +08:00
Andres Caicedo
558003704e Add missing size param to generate_image_with_dalle (#2691) 2023-04-21 04:00:44 +02:00
Toran Bruce Richards
00ecb983e7 Update README.md 2023-04-21 13:56:59 +12:00
Toran Bruce Richards
f26541188b Update README.md 2023-04-21 13:53:31 +12:00
Toran Bruce Richards
1e3bcc3f8b Update README.md 2023-04-21 12:52:22 +12:00
Torantulino
8faf4f5f79 Deploying to master from @ Significant-Gravitas/Auto-GPT@48f4119fb7 🚀 2023-04-21 00:40:07 +00:00
Toran Bruce Richards
48f4119fb7 Update sponsors_readme.yml 2023-04-21 12:38:18 +12:00
Toran Bruce Richards
ad6f18b737 Update sponsors_readme.yml 2023-04-21 12:31:37 +12:00
Toran Bruce Richards
68e479bdbd Update sponsors_readme.yml 2023-04-21 12:26:04 +12:00
Toran Bruce Richards
1dd8e570a5 Update sponsors_readme.yml 2023-04-21 12:24:18 +12:00
Toran Bruce Richards
511b0212c6 Update sponsors_readme.yml 2023-04-21 12:22:32 +12:00
Toran Bruce Richards
121e08c18e Create sponsors_readme.yml 2023-04-21 12:19:30 +12:00
Toran Bruce Richards
785c90ddb7 Remove hardcoded sponsors 2023-04-21 12:19:20 +12:00
BillSchumacher
d9d5fd5b9a Merge pull request #2727 from Pwuts/fix/spacy-install-model
fix #2654 spacy language model installation
2023-04-20 17:40:25 -05:00
Reinier van der Leer
c145d95312 Fix #2654 spacy language model installation 2023-04-20 23:58:40 +02:00
Andres Caicedo
37c5ebfe73 Merge remote-tracking branch 'upstream/master' into update-pre-commit-version 2023-04-20 20:16:09 +02:00
Ugo
0efa0d1185 Replace print statements with logging.error
This commit replaces two print statements in the _speech method of the BrianSpeech class with a single call to logging.error. This will log error messages with more detail and make it easier to diagnose issues. The changes are backward compatible and should not affect the functionality of the code.
2023-04-20 20:52:45 +03:00
Peter Banda
14d3ecaae7 Pin BeautifulSoup version to fix browse_website (#2680) 2023-04-20 19:51:52 +02:00
Bsodoge
25db6e56b0 Fix typo 2023-04-20 18:49:15 +01:00
itsmarble
e006a61c52 hotfix 2023-04-20 19:42:48 +02:00
itsmarble
5ecb08c8e8 add instruction to show .env 2023-04-20 19:26:55 +02:00
Richard Beales
0bf4987e1a Merge pull request #2644 from riensen/rename-whitelist
Use inclusive language: Rename 'blacklist' to 'denylist' and 'whitelist' to 'allowlist'
2023-04-20 18:19:52 +01:00
Andres Caicedo
3871fc70ce Merge remote-tracking branch 'upstream/master' into update-pre-commit-version 2023-04-20 19:01:25 +02:00
riensen
9b78e71d16 Use allowlist and denylist naming 2023-04-20 19:01:09 +02:00
Richard Beales
4c686f8fc0 Merge pull request #2667 from egonm12/update-readme-git-clone-command
doc: update git clone command to use stable branch
2023-04-20 17:47:47 +01:00
Andres Caicedo
9aacb68fbc Merge remote-tracking branch 'upstream/master' into feature/add-error-exceptions 2023-04-20 18:24:16 +02:00
Andres Caicedo
3de732508c Merge remote-tracking branch 'upstream/master' into update-pre-commit-version 2023-04-20 18:22:14 +02:00
Eddie Cohen
cf7544c146 Cancel in-progress docker CI on outdate (#2619)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-20 18:09:20 +02:00
Jartto
2a20ea638e Fix README ./run.sh start -> ./run.sh (#2523)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-20 18:07:53 +02:00
Andres Caicedo
6699a8ef38 Update .pre-commit-config.yaml
Update pre-commit-hooks to latest version v4.4.0
2023-04-20 16:49:11 +02:00
Andres Caicedo
f99c37aede Update milvus_memory_test.py
The 'err' variable in the except block is an instance of the ImportError class.
2023-04-20 16:42:34 +02:00
k-boikov
bb7ca692e3 include openapi-python-client in docker build (#2669)
Fixes #2658 "Docker image crashes on start"
2023-04-20 14:45:26 +02:00
Egon Meijers
c09ed61aba doc: update git clone command to use stable branch
Since master should not be used for installation, as described in the readme, it would be better to check out the stable branch immediately when cloning, to prevent people from reporting issues that are not present in the stable environment.
2023-04-20 14:22:24 +02:00
riensen
9f6d6f32a6 Update plugin instructions and improve clarity 2023-04-20 11:17:47 +02:00
Toran Bruce Richards
000389c762 Update README.md 2023-04-20 20:55:55 +12:00
Toran Bruce Richards
c963a209ab Update README.md 2023-04-20 20:23:03 +12:00
BillSchumacher
744c94c96a Lower label and command provided. 2023-04-20 02:22:54 -05:00
BillSchumacher
c561fe8925 Update app.py 2023-04-20 02:19:20 -05:00
Richard Beales
99eac6c1d9 Merge pull request #2272 from Significant-Gravitas/stable
Stable
2023-04-20 06:36:15 +01:00
Richard Beales
c4008971f7 Merge branch 'master' into stable 2023-04-20 06:32:59 +01:00
Nicholas Tindle
5155056198 feat: permissions 2023-04-20 00:25:48 -05:00
Nicholas Tindle
9cb4739e4a fix: syntax 2023-04-20 00:22:10 -05:00
Richard Beales
fe855fef13 Tweak Docker Hub push command 2023-04-20 06:22:02 +01:00
Nicholas Tindle
b9623ed424 fix: add new line back 2023-04-20 00:21:20 -05:00
Nicholas Tindle
7c45b21aa7 Update ci.yml 2023-04-20 00:11:43 -05:00
Richard Beales
c9bf95edf4 Merge pull request #2625 from Significant-Gravitas/stable-0.2.2
Stable 0.2.2 into Stable ready for release
2023-04-20 06:02:22 +01:00
Richard Beales
0fa9cf6eb0 Merge pull request #2624 from Significant-Gravitas/richbeales-patch-2
Patch docker hub CI task into Stable
2023-04-20 06:00:32 +01:00
BillSchumacher
bcda3c1a32 Merge pull request #1986 from Significant-Gravitas/richbeales-patch-2
Update docker-hub image push action
2023-04-19 23:57:56 -05:00
Pi
2f053fe9db Merge pull request #2605 from Pwuts/fix/pr-size-workflow
fix shirt-sizing workflow permissions
2023-04-20 02:32:59 +01:00
Reinier van der Leer
376db5a123 fix shirt-sizing workflow permissions 2023-04-20 03:20:28 +02:00
Toran Bruce Richards
3c23e7145d Update README.md 2023-04-20 13:00:41 +12:00
Toran Bruce Richards
981b6073e7 Update README.md 2023-04-20 12:40:40 +12:00
Nicholas Tindle
a82d49247a Shirt size labeling for PRs (#2467)
Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-20 02:07:41 +02:00
BillSchumacher
19f893e1e2 Merge pull request #757 from BillSchumacher/plugin-support
Plugin Support
2023-04-19 18:48:53 -05:00
BillSchumacher
c731675443 Fix url 2023-04-19 18:45:29 -05:00
BillSchumacher
d8fd834142 linting 2023-04-19 18:34:38 -05:00
BillSchumacher
d876de0bef Make tests a bit spicier and fix, maybe. 2023-04-19 18:32:49 -05:00
BillSchumacher
16f0e22ffa linting 2023-04-19 18:21:03 -05:00
BillSchumacher
d7679d755f Fix all commands and cleanup 2023-04-19 18:17:04 -05:00
BillSchumacher
23c650ca10 Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-19 17:28:17 -05:00
BillSchumacher
d5523600c7 Merge pull request #10 from riensen/plugin-support
Adding Allowlisted Plugins via .env
2023-04-19 17:22:36 -05:00
BillSchumacher
f5a2acd82a Merge pull request #2599 from Significant-Gravitas/master
Master -> Test 0.2.2
2023-04-19 17:14:27 -05:00
bszollosinagy
fa91bc154c Fix model context overflow issue (#2542)
Co-authored-by: batyu <batyu@localhost>
2023-04-19 23:28:57 +02:00
Toran Bruce Richards
a5a9b5dbd8 Update README.md 2023-04-20 09:23:18 +12:00
Toran Bruce Richards
cdbcd8596e Update README.md 2023-04-20 09:22:54 +12:00
Richard Beales
9b7719071f Merge pull request #2573 from Significant-Gravitas/fix/symlinks-in-workspace-path
fix(workspace): resolve symlinks in workspace path before checking
2023-04-19 21:49:03 +01:00
Richard Beales
a71ae26b52 Merge pull request #2576 from tejen/patch-1
Update README.md
2023-04-19 21:30:23 +01:00
Tejen Patel
66b5c760f4 Update README.md 2023-04-19 15:11:35 -04:00
Reinier van der Leer
37ff26ec2c fix(workspace): resolve symlinks in workspace path before checking 2023-04-19 20:58:14 +02:00
Richard Beales
a2723f16f2 Merge pull request #2448 from Pwuts/fix/default-config
Consolidate default config with config.py as master
2023-04-19 18:13:21 +01:00
Pi
a3aaf621fe Merge pull request #2562 from richbeales/master
Print warning to users of Python < 3.10
2023-04-19 18:10:12 +01:00
Richard Beales
903e21b2dd Merge branch 'Significant-Gravitas:master' into master 2023-04-19 18:06:26 +01:00
Richard Beales
0400d72824 Print a warning if current py version < 3.10 2023-04-19 18:05:56 +01:00
Reinier van der Leer
e08b4d601f Set WIPE_REDIS_ON_START default True 2023-04-19 18:37:05 +02:00
Reinier van der Leer
20bd2de54a Add headless browser setting 2023-04-19 18:19:39 +02:00
Reinier van der Leer
52233dff50 Merge branch 'master' into fix/default-config 2023-04-19 18:13:41 +02:00
0xArty
1f3cd214e6 Merge pull request #2339 from mikekelly/dont-install-sorcery-on-docker
Don't install sorcery on docker
2023-04-19 16:59:20 +01:00
Reinier van der Leer
0d7ab414d9 Merge pull request #2355 from yunzheng1112/fix-azure-config
* fix path of Azure config file
* default azure_api_type -> azure
2023-04-19 17:25:23 +02:00
Reinier van der Leer
7eed489ea1 Merge pull request #2351 from zvrr/zvrr-patch-1
fix azure_model_to_deployment_id_map type (list -> dict)
2023-04-19 17:23:40 +02:00
Reinier van der Leer
45f490e0ad llm_utils: revert changing deployment_id parameter to engine 2023-04-19 17:21:06 +02:00
Mike Kelly
bb2066df04 remove sorcery 2023-04-19 16:04:48 +01:00
Eesa Hamza
ec945d1022 Fixed links 2023-04-19 17:59:17 +03:00
Eesa Hamza
9240a554f1 Added version select to bug template 2023-04-19 17:55:36 +03:00
Reinier van der Leer
6cecb9766a Merge pull request #2321 from zzzgydi/fix-system-prompt
fix: remove duplicate task complete command prompt
2023-04-19 16:42:32 +02:00
Reinier van der Leer
d9cb000f65 Merge pull request #2324 from itaihochman/iss1211
* Use BROWSE_MAX_CHUNK_LENGTH for chunking text
* Fix Issue #1211: GPT-3.5 token limit is lower than the default
2023-04-19 16:40:55 +02:00
Toran Bruce Richards
d163c564e5 Update README.md 2023-04-19 23:33:44 +12:00
Toran Bruce Richards
d4cef97e2f Update README.md 2023-04-19 23:30:15 +12:00
Toran Bruce Richards
ce8dfcc604 update e-sponsors 2023-04-19 23:29:33 +12:00
Toran Bruce Richards
a56459fee3 Update enterprise-sponsors 2023-04-19 23:24:48 +12:00
jazelly
fa8562bc0c misc: fix typo in benchmark 2023-04-19 20:47:36 +09:30
riensen
c5b81b5e10 Adding Allowlisted Plugins via env 2023-04-19 12:50:00 +02:00
Richard Beales
fdd79223b0 Merge pull request #2495 from Explorergt92/patch-3
Update README.md Windows run.bat instructions
2023-04-19 07:20:58 +01:00
BillSchumacher
a053bb074a Merge pull request #2494 from richbeales/master
Print the current Git branch on startup - warn if unsupported
2023-04-19 01:00:38 -05:00
John
598eea9851 Update README.md
Correcting the cause of issue #2476
2023-04-19 01:57:47 -04:00
Richard Beales
4ba46307f7 Print the current Git branch on startup - warn if unsupported 2023-04-19 06:57:15 +01:00
Reinier van der Leer
8581ee2c0c Merge branch 'master' into fix/default-config 2023-04-19 02:51:22 +02:00
BillSchumacher
ecf2ba12db Merge pull request #2032 from bingoko/master
refactoring all json utilities
2023-04-18 19:46:42 -05:00
BillSchumacher
6e94409594 linting 2023-04-18 19:40:14 -05:00
BillSchumacher
239d64a602 Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/2032 2023-04-18 19:39:11 -05:00
Walter Nasich
f582d9ca49 Delete unused folder /outputs/ (#1130)
Delete unused folder /outputs/ as it is not being used to store output files
2023-04-19 02:36:32 +02:00
BillSchumacher
fdaa55a452 Merge pull request #1477 from Tymec/feature/more-image-gen
Image generation improvements
2023-04-18 19:29:24 -05:00
BillSchumacher
aeb1178a47 linting 2023-04-18 19:26:18 -05:00
BillSchumacher
5b86682e24 Skip imagegen tests in CI 2023-04-18 19:24:13 -05:00
BillSchumacher
7086961e00 Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/1477 2023-04-18 19:20:24 -05:00
Will Callender
8532307b2f Rename evaluate_code to analyze_code (#1371)
ChatGPT is less confused by this phrasing

From my own observations and others' (i.e. #101 and #286), ChatGPT seems to think that `evaluate_code` will actually run code, rather than just provide feedback. Since changing the phrasing to `analyze_code`, I haven't seen the AI make this mistake.

---------

Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-19 02:16:08 +02:00
BillSchumacher
4c7b582454 apply black 2023-04-18 19:09:15 -05:00
BillSchumacher
3f2d14f4d8 Fix isort? 2023-04-18 19:07:39 -05:00
Reinier van der Leer
2db4a5da57 Merge branch 'master' into fix/default-config 2023-04-19 02:04:11 +02:00
BillSchumacher
221a4b0b50 I guess linux doesn't like this.... 2023-04-18 19:02:10 -05:00
BillSchumacher
86d3444fb8 isort, add proper skips. 2023-04-18 18:59:23 -05:00
BillSchumacher
4701357a21 fix test 2023-04-18 18:56:11 -05:00
Josh XT
9514919d37 Option to disable working directory restrictions (#1875)
Remove restriction on working directory if RESTRICT_TO_WORKSPACE != True

---------

Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-19 01:54:38 +02:00
BillSchumacher
5813592206 fix readme 2023-04-18 18:51:28 -05:00
BillSchumacher
7d45de8901 fix merge 2023-04-18 18:48:44 -05:00
Tymec
ac023e95c0 fix: remove "wait-for-model" header from hf request 2023-04-19 01:46:24 +02:00
BillSchumacher
085842d43c Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-18 18:44:40 -05:00
Drikus Roor
24d5e1fc8a Ensure Python 3.10 & 3.11 compatibility (#1815)
CI: Ensure compatibility with Python 3.10 & 3.11

---------

Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-19 01:38:42 +02:00
Tymec
da4c765378 test: added unit test 2023-04-19 01:38:31 +02:00
Will Callender
74aa4add1b fix(python-run): prompt users to install Docker when execute_python_file encounters a Docker error (#2231)
fix(python-run): make error message more explicit

---------

Co-authored-by: Reinier van der Leer <github@pwuts.nl>
2023-04-19 01:37:31 +02:00
Tymec
5576994c2c fix: merge conflicts 2023-04-19 01:30:28 +02:00
Reinier van der Leer
e2accab87e Move to Python 3.10 & improve CI workflow (#2369)
* Use Python 3.10 in CI, benchmark, devcontainer, docker config, .sourcery.yaml
* Improve Python CI workflow
2023-04-19 01:27:29 +02:00
BillSchumacher
b188c2b3e3 Merge pull request #4 from evahteev/_openai-plugin-support
[WIP] Openai plugins support
2023-04-18 18:20:37 -05:00
BillSchumacher
ebee041c35 fix merge 2023-04-18 18:18:15 -05:00
Reinier van der Leer
8020eaa2e9 Merge pull request #1473 from ickma/support-headless-chrome-mode
Add support for running Chrome in Headless mode.
2023-04-19 01:14:49 +02:00
BillSchumacher
59a9986786 Merge branch 'plugin-support' of https://github.com/BillSchumacher/Auto-GPT into pr/4 2023-04-18 18:12:35 -05:00
BillSchumacher
ef0216dbe7 Merge pull request #6 from TaylorBeeston/type-fixes
Type fixes
2023-04-18 17:57:23 -05:00
lengweiping1983
ae7b81dc50 Merge branch 'master' into arg_config_optimization 2023-04-19 06:48:31 +08:00
Reinier van der Leer
78734dade8 Consolidate default config with config.py as master 2023-04-18 23:40:43 +02:00
Pi
89539d0cf1 Merge pull request #2441 from richbeales/master
Hotfix - Announcement filename was incorrect
2023-04-18 22:20:43 +01:00
Richard Beales
1887f51516 Merge branch 'Significant-Gravitas:master' into master 2023-04-18 22:16:29 +01:00
Richard Beales
3ebe125d3f Bugfix - filename for announcement was wrong 2023-04-18 22:16:11 +01:00
Pi
bed0860c71 Merge pull request #2429 from richbeales/master
Added ability to output news/announcements on startup
2023-04-18 21:48:15 +01:00
Richard Beales
88ebebf74f Implement suggestions from pi - save current news to file 2023-04-18 21:45:09 +01:00
Evgeny Vakhteev
49e4b75039 removing accidentally committed ./docker 2023-04-18 13:16:10 -07:00
Evgeny Vakhteev
c62c8c6e71 merge BillSchumacher/plugin-support, conflicts 2023-04-18 13:13:38 -07:00
Evgeny Vakhteev
894026cdd4 reshaping code and fixing tests 2023-04-18 12:52:09 -07:00
Richard Beales
913c933e8c isort 2023-04-18 20:13:31 +01:00
Richard Beales
90e6a55e37 Black formatting 2023-04-18 20:11:26 +01:00
Richard Beales
4a07790910 Added ability to output news/announcements on startup 2023-04-18 20:09:07 +01:00
BillSchumacher
5752a466a2 Merge pull request #2318 from ezolenko/bugs/execute_shell_popen
Fix for execute_shell_popen using WORKING_DIRECTORY
2023-04-18 13:07:27 -05:00
0xArty
b5f1ba0df1 Merge pull request #2415 from cryptidv/template-updates
[Hotfix] bugs template again
2023-04-18 17:59:03 +01:00
Eesa Hamza
5c55c35821 Hotfix bugs template again 2023-04-18 19:57:24 +03:00
0xArty
efb4429d33 Merge pull request #2408 from cryptidv/template-updates
Hotfix bugs template
2023-04-18 17:55:15 +01:00
Eesa Hamza
285188bdde Added required under 'validations' 2023-04-18 19:52:15 +03:00
Eesa Hamza
4ca8b376b6 Fix label being required 2023-04-18 19:25:46 +03:00
Eesa Hamza
61f5925502 Hotfix bugs template 2023-04-18 19:13:27 +03:00
Pi
8d9505cda5 Merge pull request #2375 from cryptidv/template-updates
Improve the Issue Templates
2023-04-18 17:07:53 +01:00
pkqs90
4cc90b8eb4 Fix docker usage readme 2023-04-19 00:01:26 +08:00
lengweiping
09e29f1e1b fix conflicts 2023-04-18 23:37:17 +08:00
Eesa Hamza
17caf226d0 Slight fixes 2023-04-18 17:57:40 +03:00
Eesa Hamza
7a6eb19b1c Fixed bugs template and added suggestions 2023-04-18 17:55:09 +03:00
Eesa Hamza
c846c1d331 Fixed and added suggestions to bugs template 2023-04-18 17:53:49 +03:00
Reinier van der Leer
d6b1aa677d consolidate browser settings 2023-04-18 16:46:58 +02:00
0xArty
fd4a2ed414 Merge pull request #2373 from 0xArty/click-arg-pasing
Use click to parse arguments
2023-04-18 15:16:53 +01:00
Reinier van der Leer
9d14b113a3 Merge remote-tracking branch 'origin/master' into support-headless-chrome-mode 2023-04-18 16:01:45 +02:00
EH
0c8467c404 Update 1.bug.yml 2023-04-18 14:34:24 +01:00
0xArty
a2c0db44d6 moved cli into separate file 2023-04-18 14:33:55 +01:00
Eesa Hamza
7286ef3a52 Spruced up the bug issue template 2023-04-18 16:07:15 +03:00
0xArty
6f87fb63c1 Runs agent as default command 2023-04-18 13:54:06 +01:00
EH
fbdf9d4bd4 docs: add warning for non-essential contributions (#2359) 2023-04-18 14:21:57 +02:00
0xArty
b5378174f3 Switched to using click 2023-04-18 13:19:17 +01:00
Yun Zheng
c1fe34adcb Fix azure_api_type in azure template 2023-04-18 17:24:59 +08:00
zvrr
f7014e8773 Update config.py
azure_model_to_deployment_id_map default type should be a dict, not a list
2023-04-18 17:06:58 +08:00
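A minimal sketch of why this default type matters, assuming only the field name from the commit message (the surrounding config.py is not shown here):

```python
# Illustrative only; the attribute name comes from the commit message,
# the rest is an assumption, not the project's actual config code.
azure_model_to_deployment_id_map: dict = {}  # was mistyped as a list

# A dict supports looking up an Azure deployment ID by model name,
# which a list default would break:
azure_model_to_deployment_id_map["gpt-3.5-turbo"] = "my-gpt35-deployment"
print(azure_model_to_deployment_id_map.get("gpt-3.5-turbo"))
```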
Yun Zheng
fc6070d574 Fix Azure Config file location 2023-04-18 17:03:48 +08:00
Richard Beales
4c2a566acc Merge pull request #2327 from Significant-Gravitas/automatic-CI
Make Continuous Integration Automatic
2023-04-18 08:20:15 +01:00
Toran Bruce Richards
7ac296081c Add pull_request_target to CI trigger 2023-04-18 19:11:09 +12:00
Toran Bruce Richards
525073bb94 Change on PR to all branches 2023-04-18 18:46:50 +12:00
Toran Bruce Richards
0664b737ab Updates sponsors 2023-04-18 18:11:56 +12:00
itaihochman
e34ede79b9 Add an option to set the chunk size using the
configuration - BROWSE_CHUNK_MAX_LENGTH=4000
This way, we can avoid errors of exceeding chunk size when using gpt-3.5
2023-04-18 08:56:00 +03:00
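A rough sketch of the chunking behavior this setting controls. The setting name comes from the commit; the function below is an illustration, not the project's actual splitter (which may honor sentence or paragraph boundaries):

```python
BROWSE_CHUNK_MAX_LENGTH = 4000  # configurable limit, per the commit

def split_text(text: str, max_length: int = BROWSE_CHUNK_MAX_LENGTH):
    """Yield chunks of at most max_length characters."""
    for start in range(0, len(text), max_length):
        yield text[start:start + max_length]

# With gpt-3.5's smaller context window, a lower limit avoids
# "maximum context length exceeded" errors:
chunks = list(split_text("x" * 10_000, max_length=4000))
print([len(c) for c in chunks])  # [4000, 4000, 2000]
```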
GyDi
a0160eef0c fix: remove duplicate task complete prompt 2023-04-18 13:51:16 +08:00
Taylor Beeston
b84de4f7f8 ♻️ Use AutoGPT template package for the plugin type 2023-04-17 22:10:40 -07:00
lengweiping
275b2eaae1 only adjust the order, so argument definitions are consistent with the logical order 2023-04-18 13:06:09 +08:00
Eugene Zolenko
a88113de33 Fix for execute_shell_popen using WORKING_DIRECTORY
Looks like things got changed to WORKSPACE_PATH recently?
2023-04-17 23:02:07 -06:00
Evgeny Vakhteev
9fd80a8660 tests, model 2023-04-17 20:51:27 -07:00
Evgeny Vakhteev
193c80849f separating OpenAI Plugin base class 2023-04-17 18:42:42 -07:00
Evgeny Vakhteev
9ed5e0f1fc adding plugin interface instantiation 2023-04-17 17:13:53 -07:00
bingokon
6787c2eeed fix json_schemas not found error 2023-04-18 00:17:42 +01:00
bingokon
31900f6733 Merge remote-tracking branch 'upstream/master'
# Conflicts:
#	autogpt/app.py
#	autogpt/json_fixes/auto_fix.py
#	autogpt/json_fixes/bracket_termination.py
#	autogpt/json_fixes/master_json_fix_method.py
#	autogpt/json_utils/json_fix_llm.py
#	autogpt/json_utils/utilities.py
2023-04-18 00:01:58 +01:00
BillSchumacher
67846bad21 Merge pull request #2192 from merwanehamadi/feature/change-flake-config
Align flake, black and isort to pipelines and precommit hooks
2023-04-17 17:28:12 -05:00
Evgeny Vakhteev
7f4e38844f adding openai plugin loader 2023-04-17 14:57:55 -07:00
Merwane Hamadi
da65bc3f68 black 2023-04-17 13:47:38 -07:00
Merwane Hamadi
cf9a94a8b6 isort implemented 2023-04-17 13:42:01 -07:00
Merwane Hamadi
9577468f0c remove isort 2023-04-17 12:58:05 -07:00
Merwane Hamadi
3134beb983 Configure isort settings in pyproject.toml and remove tool.setuptools 2023-04-17 12:51:12 -07:00
Merwane Hamadi
254cd69748 Update CI workflow to use flake8, black, and isort formatting checks 2023-04-17 12:50:21 -07:00
Merwane Hamadi
2f4ef3ba6a Update pre-commit hooks with isort, black, and local pytest-check 2023-04-17 12:49:56 -07:00
Sourcery AI
9705f60dd3 'Refactored by Sourcery' 2023-04-17 19:44:54 +00:00
Richard Beales
9cb1555ade Merge pull request #2193 from Lootheo/betterparsing
changed rstrip to strip and added a case for empty string
2023-04-17 20:43:25 +01:00
Taylor Beeston
ea67b6772c 🐛 Minor type fixes 2023-04-17 12:42:17 -07:00
Taylor Beeston
f784049079 🏷️ Type plugins field in config 2023-04-17 12:41:34 -07:00
Taylor Beeston
d23ada30d7 🐛 Fix on_planning 2023-04-17 12:41:17 -07:00
Taylor Beeston
dea5000a01 🐛 Fix pre_instruction 2023-04-17 12:40:46 -07:00
Taylor Beeston
239aa3aa02 🎨 Bring in plugin_template
This would ideally be a shared package
2023-04-17 12:39:18 -07:00
Richard Beales
75baa11e81 Merge pull request #2227 from Tmpecho/patch-1
Added return type hint to execute_code.py file
2023-04-17 20:30:21 +01:00
Richard Beales
e40d6f3314 Merge pull request #2050 from tzengyuxio/master
fix: unreadable text in console and potentially over the max token limit
2023-04-17 20:23:35 +01:00
Richard Beales
e849e4ff0b Merge pull request #1836 from cs0lar/fix/weaviate_index_to_classname
fixes Weaviate index name to classname conversion
2023-04-17 20:21:39 +01:00
Richard Beales
6222b2d542 Merge pull request #1474 from bszollosinagy/allow_easy_setup
Allow local Development without pip install using "pip install -e ."
2023-04-17 20:18:21 +01:00
Tmpecho
9c062b44aa Added return type hint to functions 2023-04-17 20:46:47 +02:00
Richard Beales
cd587bc406 Merge pull request #2096 from cryptidv/fix-linux-selenium
Add Fixes for Selenium Browsing On Linux
2023-04-17 19:37:16 +01:00
Richard Beales
935481c4b5 Merge pull request #2093 from lengweiping1983/master
move memory object into the memory_add block
2023-04-17 19:32:18 +01:00
Richard Beales
d063436b0a Merge pull request #2176 from hamidzr/patch-1
docs: update docs around Milvus
2023-04-17 19:26:04 +01:00
Richard Beales
3f0b84eb7b Merge pull request #2183 from aminghani/fix-bug-imoprt123
added missing import
2023-04-17 19:22:20 +01:00
BillSchumacher
64e05778ef Merge pull request #1987 from cryptidv/fix-oai-error-msgs
Improve the error logging for OAI Issues
2023-04-17 12:47:28 -05:00
BillSchumacher
6cbf00df60 Merge pull request #2217 from Pwuts/patch-3
fix(pr-label): concurrency group cannot be empty
2023-04-17 12:21:51 -05:00
Reinier van der Leer
3b37c89d88 fix(pr-label): concurrency group cannot be empty 2023-04-17 19:15:20 +02:00
Richard Beales
d4b74661aa Merge pull request #2168 from H-jj-R/master
Spelling Fixes
2023-04-17 18:02:09 +01:00
Richard Beales
ee224c395e Merge pull request #2172 from Funkelfetisch/patch-1
Update .env.template
2023-04-17 17:57:57 +01:00
Richard Beales
9dea8b1f66 Merge pull request #2105 from gabrielrbarbosa/fix-brian-tts-speech-exception
Fix BRIAN_TTS - Prevent TypeError exception in _speech method
2023-04-17 17:46:21 +01:00
Richard Beales
efc7b4deb6 Merge pull request #2024 from deece/master
Remove requirements-docker.txt
2023-04-17 17:43:01 +01:00
Richard Beales
e6ef12d313 Merge pull request #2153 from bobvanluijt/patch-2
Update README.md with Weaviate installation and reference
2023-04-17 17:41:32 +01:00
Richard Beales
a5506abdad Merge pull request #2137 from suzuken/config-fix-openai-link
config.py: update OpenAI link to platform.openai.com
2023-04-17 17:40:48 +01:00
Richard Beales
cf2c3fde41 Merge pull request #2132 from mawsyh/master
Update .env.template
2023-04-17 17:39:58 +01:00
Richard Beales
ed89d9f801 Merge pull request #2129 from XFFXFF/fix_missing_import
fix a missing import
2023-04-17 17:39:08 +01:00
Richard Beales
2bb0ecf497 Merge pull request #2203 from Pwuts/patch-2
fix(pr-label): mitigate excessive concurrent runs
2023-04-17 17:35:04 +01:00
Evgeny Vakhteev
08ad320d19 moving load plugins into plugins from main, adding tests 2023-04-17 09:33:01 -07:00
Reinier van der Leer
ef7b417105 fix(pr-label): mitigate excessive concurrent runs 2023-04-17 18:16:37 +02:00
jingxing
f2baa0872b config.py format 2023-04-17 17:16:14 +01:00
Steve Byerly
8637b8b61b whitespace 2023-04-17 17:12:23 +01:00
Steve Byerly
6ac9ce614a whitespace 2023-04-17 17:12:23 +01:00
Steve Byerly
bd670b4db3 whitespace 2023-04-17 17:12:23 +01:00
Steve Byerly
def96ffe2f fix split file 2023-04-17 17:12:23 +01:00
Tom Kaitchuck
6b64158356 Unbound summary size
Signed-off-by: Tom Kaitchuck <Tom.Kaitchuck@gmail.com>
2023-04-17 17:08:21 +01:00
jimmycliff obonyo
23e7031326 install chrome/firefox for headless browsing when running in docker container 2023-04-17 17:06:56 +01:00
REal0day
a2a6f84f13 internal resource request bug 2023-04-17 17:05:45 +01:00
BillSchumacher
316f37bfce Merge pull request #2198 from Pwuts/patch-1
fix(pr-label): set job permissions explicitly
2023-04-17 10:51:02 -05:00
Reinier van der Leer
e7c3ff9b9e fix(pr-label): set job permissions explicitly 2023-04-17 17:47:58 +02:00
rickythefox
baf31e69e5 Use python:3-alpine image for code execution (#1192) 2023-04-17 16:45:23 +01:00
BillSchumacher
7fd55fa2f4 Merge pull request #2195 from Pwuts/patch-1
feat(pr-labels): auto-label conflicting PRs
2023-04-17 10:36:53 -05:00
Reinier van der Leer
35106ef662 feat(pr-labels): auto-label conflicting PRs 2023-04-17 17:33:50 +02:00
lfricken
d4860fe9f0 Don't incapacitate yourself! (#1240)
* subprocesses

* fix lint

* fix more lint

* fix merge

* fix merge again
2023-04-17 16:27:53 +01:00
superherointj
d47466ddf9 Add Nix flakes support through direnv
* Nix (https://nixos.org) is a reproducible build system.
* Enables Nix users to use/develop Auto-GPT without installing pip or any other future Auto-GPT dependency.
2023-04-17 16:22:46 +01:00
Manuel Otheo
57ee84437b changed break to continue 2023-04-17 09:20:52 -06:00
Manuel Otheo
286edbbb8c changed rstrip to strip and added a case for empty string
changed rstrip to strip and added a case for empty string in agent.py
2023-04-17 09:17:07 -06:00
lengweiping1983
00ba50bcb4 Merge branch 'Significant-Gravitas:master' into master 2023-04-17 23:04:40 +08:00
Acer
1d49b87e48 added missing import 2023-04-17 18:34:11 +04:30
Hamid Zare
6700ac94fa docs: update docs
fix a typo
2023-04-17 09:28:32 -04:00
NEBULITE Berlin
10b2458f58 Update .env.template
"redis" as hostname for redis to correctly use the docker compose internal networking feature
2023-04-17 14:50:28 +02:00
EH
2c55ff0b3d Update web_selenium.py 2023-04-17 15:43:14 +03:00
Eesa Hamza
9887016bdf Move under chrome 2023-04-17 15:39:04 +03:00
H-jj-R
a0b0a4cec5 Merge remote-tracking branch 'origin/master' 2023-04-17 13:26:18 +01:00
H-jj-R
8dadf79614 Spelling fixes 2023-04-17 13:25:49 +01:00
Eesa Hamza
10cd0f3362 Add the OpenAI API Keys Configuration to the top of the readme 2023-04-17 13:14:58 +01:00
BingokoN
d82ca101de Resolved merge conflict: Deleted autogpt/json_fixes/auto_fix.py as in HEAD 2023-04-17 12:25:55 +01:00
BingokoN
0d2e196368 refactoring/splitting the json fix functions into a general module and an llm module, which needs the AI's assistance. 2023-04-17 12:14:43 +01:00
Bob van Luijt
125f0ba61a Update README.md with Weaviate installation and reference 2023-04-17 12:46:27 +02:00
suzuken
74a8b5d832 config.py: update OpenAI link 2023-04-17 18:15:49 +09:00
Mad Misaghi
bd25822b35 Update .env.template
added Milvus
2023-04-17 12:24:27 +03:30
XFFXFF
2b87245e22 fix a missing import 2023-04-17 16:21:52 +08:00
Alastair D'Silva
60b779a905 Remove requirements-docker.txt
This file needs to be maintained in parallel with requirements.txt, but
isn't, which causes problems when new dependencies are introduced.

Instead, derive the Docker dependencies from the stock ones.

Signed-off-by: Alastair D'Silva <alastair@d-silva.org>
2023-04-17 17:09:13 +10:00
Richard Beales
f41febd3ae Merge pull request #2083 from Song2017/master
add docker requirements - jsonschema
2023-04-17 07:37:29 +01:00
jingxing
1001e5489e config.py format 2023-04-17 14:24:10 +08:00
Richard Beales
be712fc606 Merge pull request #905 from endolith/noqa
Remove irrelevant noqa comments
2023-04-17 07:14:41 +01:00
Richard Beales
cf25831ad5 Merge pull request #2003 from BatesJernigan/format-error-msg
feat: (aesthetic) add space on warning message
2023-04-17 07:13:47 +01:00
Richard Beales
6dbe84a1bf Merge pull request #1555 from nolan23/update-docs
move comment to correct position
2023-04-17 07:12:25 +01:00
Richard Beales
eefbccd957 Merge pull request #2012 from 0xf333/0xf333_branch
Fix: Update run_continuous.sh to correctly pass all command-line arguments
2023-04-17 07:09:45 +01:00
BillSchumacher
fe85f079b0 Fix early abort 2023-04-17 01:09:17 -05:00
Richard Beales
2cb559ebdd Merge pull request #2061 from wangxuqi/milvus_memory_test_fix
Fix milvus test error: "NameError: name 'MockConfig' is not defined"
2023-04-17 07:08:41 +01:00
Gabriel R. Barbosa
64383776a2 Update brian.py - Prevent TypeError exception
TypeError: BrianSpeech._speech() takes 2 positional arguments but 3 were given.

Use the same arguments as in the _speech method from gtts.py
2023-04-17 03:04:35 -03:00
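The shape of the fix, as a sketch; everything beyond the class and method names in the traceback is an assumption:

```python
class BrianSpeech:
    # Before (assumed): def _speech(self, text) -> bool
    # Call sites pass a voice index too, so Python raises:
    #   TypeError: _speech() takes 2 positional arguments but 3 were given
    def _speech(self, text: str, voice_index: int = 0) -> bool:
        # Matching the gtts.py _speech signature (per the commit) keeps
        # the shared call sites compatible across TTS backends.
        print(f"speaking: {text!r} (voice {voice_index})")
        return True

BrianSpeech()._speech("hello", 0)  # no longer raises TypeError
```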
BillSchumacher
8386188356 Fix early abort 2023-04-17 00:49:51 -05:00
Richard Beales
664f896696 Update dockerhub-imagepush.yml 2023-04-17 06:22:19 +01:00
Eesa Hamza
e86764df45 Add linux selenium fixes 2023-04-17 07:55:48 +03:00
lengweiping
71c6600abf move memory object into the memory_add block 2023-04-17 12:44:46 +08:00
BillSchumacher
fbd4e06df5 Add early abort functions. 2023-04-16 23:39:33 -05:00
BillSchumacher
3715ebc7eb Add hooks for chat completion 2023-04-16 23:30:42 -05:00
BillSchumacher
d394b032d7 Fix test 2023-04-16 23:23:31 -05:00
BillSchumacher
23d3dafc51 Maybe fix tests, fix safe_path function. 2023-04-16 23:18:29 -05:00
BillSchumacher
708374d95b fix linting 2023-04-16 22:56:34 -05:00
Ben Song
0fa8073947 add docker requirements - jsonschema 2023-04-17 11:53:05 +08:00
BillSchumacher
81c65af560 blacked 2023-04-16 22:51:39 -05:00
BillSchumacher
c0aa423d7b Fix agent remembering do nothing command, use correct google function, disabled image_gen if not configured. 2023-04-16 22:46:38 -05:00
BillSchumacher
03c137741a Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-16 22:13:37 -05:00
BillSchumacher
c110f3489d Finish integrating command registry 2023-04-16 21:51:36 -05:00
EH
9589334a30 Add File Downloading Capabilities (#1680)
* Added 'download_file' command

* Added util and fixed spinner

* Fixed comma and added autogpt/auto_gpt_workspace to .gitignore

* Fix linter issues

* Fix more linter issues

* Fix Lint Issues

* Added 'download_file' command

* Added util and fixed spinner

* Fixed comma and added autogpt/auto_gpt_workspace to .gitignore

* Fix linter issues

* Fix more linter issues

* Conditionally add the 'download_file' prompt

* Update args.py

* Removed Duplicate Prompt

* Switched to using path_in_workspace function
2023-04-17 03:34:02 +01:00
Void&Null
0409079983 Added Credit to README.md Demo 2023-04-17 03:30:19 +01:00
xuqi.wxq
1d4dc0c534 Fix milvus test error: "NameError: name 'MockConfig' is not defined" 2023-04-17 10:17:26 +08:00
BillSchumacher
6fb1369939 Merge pull request #2022 from AdrianScott/patch-1
Added one space after period for better formatting
2023-04-16 20:53:40 -05:00
Void&Null
9ffa587f6f Add new demo video to the README 2023-04-17 02:46:30 +01:00
BillSchumacher
42f81c62dc Merge pull request #2040 from Pwuts/patch-1
Clean up README
2023-04-16 20:34:04 -05:00
Tzeng Yuxio
da72e69196 fix: unreadable text in console and potentially over the max token limit 2023-04-17 09:28:33 +08:00
Reinier van der Leer
24648fb537 Add Get Help header in README 2023-04-17 03:21:46 +02:00
Reinier van der Leer
56ecbeeef7 Clean up README 2023-04-17 02:22:18 +02:00
bingokon
7a32e03bd5 refactoring all the json utilities 2023-04-17 00:48:53 +01:00
Chris Cheney
15059c2090 ensure git operations occur in the working directory 2023-04-17 00:31:54 +01:00
Adrian Scott
c71c61dc58 Added one space after period for better formatting 2023-04-16 18:14:16 -05:00
Merwane Hamadi
1513be4acd hotfix user input 2023-04-16 23:47:44 +01:00
0xf333
4eb8e7823d Fix: Remove quotes around $@ in run_continuous.sh
Description:
Per maintainer's request, removed quotes around `$@` in `run_continuous.sh`.
This change allows the script to forward arguments as is. Please note that
this modification might cause issues if any of the command-line arguments
contain spaces or special characters. However, this update aligns with the
preferred format for the repository.

Suggestion from:
https://github.com/Significant-Gravitas/Auto-GPT/pull/1941#discussion_r1168035557
2023-04-16 18:10:45 -04:00
0x333
30e7693b24 Merge branch 'Significant-Gravitas:master' into 0xf333_branch 2023-04-16 18:10:19 -04:00
k-boikov
4f33e1bf89 add utf-8 encoding to file handlers for logging 2023-04-16 23:09:14 +01:00
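For reference, a minimal example of the fix (file and logger names here are illustrative): logging.FileHandler otherwise uses the platform default encoding, which can raise UnicodeEncodeError on non-ASCII log messages.

```python
import logging

# Pass encoding="utf-8" explicitly so non-ASCII text logs cleanly
# on every platform, regardless of the default locale encoding.
handler = logging.FileHandler("activity.log", encoding="utf-8")
logger = logging.getLogger("autogpt")
logger.addHandler(handler)
logger.warning("non-ASCII text logs cleanly: café ✓")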
Merwane Hamadi
89e0e89927 change master prompt to system prompt 2023-04-16 23:07:06 +01:00
Merwane Hamadi
3b80253fb3 Update process creation in benchmark script 2023-04-16 23:07:06 +01:00
Merwane Hamadi
b5e0127b16 Only print JSON object validation message in debug mode 2023-04-16 23:07:06 +01:00
Merwane Hamadi
b50259c25d Update variable names, improve comments, and modify input handling in agent.py 2023-04-16 23:07:06 +01:00
Merwane Hamadi
21ccaf2ce8 Refactor variable names and remove unnecessary blank lines in __main__.py 2023-04-16 23:07:06 +01:00
0x333
b0accbfe58 Merge branch 'Significant-Gravitas:master' into 0xf333_branch 2023-04-16 18:06:44 -04:00
Benedict Hobart
8f0d553e4e Improve dev containers so autogpt can browse the web 2023-04-16 22:59:43 +01:00
0x333
b8baa549cc Merge branch 'Significant-Gravitas:master' into 0xf333_branch 2023-04-16 17:57:05 -04:00
endolith
5ff7fc340b Remove extraneous noqa E722 comment
E722 is "Do not use bare except, specify exception instead" but
except json.JSONDecodeError
is not a bare except
2023-04-16 17:21:34 -04:00
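To illustrate the distinction this commit makes:

```python
import json

text = "{not valid json"

try:
    data = json.loads(text)
except json.JSONDecodeError:  # a specified exception; E722 does not apply
    data = None

try:
    data = json.loads(text)
except:  # noqa: E722  <- only a bare except like this one triggers E722
    data = None
```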
0xArty
955a5b0a43 Marked local cache tests as integration tests as they require API keys 2023-04-16 22:17:03 +01:00
0xArty
147d3733bf Change ci to pytest 2023-04-16 22:17:03 +01:00
0xf333
4269326ddf Fix: Update run_continuous.sh to pass all command-line arguments
Description:

- Modified `run_continuous.sh` to include the `--continuous` flag directly in the command:
  - Removed the unused `argument` variable.
  - Added the `--continuous` flag to the `./run.sh` command.
  - Ensured all command-line arguments are passed through to `run.sh` and the `autogpt` module.

This change improves the usability of the `run_continuous.sh` script by allowing users to provide
additional command-line arguments along with the `--continuous` flag. It ensures that all arguments
are properly passed to the `run.sh` script and eventually to the `autogpt` module, preventing
confusion and providing more flexible usage.

Suggestion from:
https://github.com/Significant-Gravitas/Auto-GPT/pull/1941#discussion_r1167977442
2023-04-16 17:03:18 -04:00
0xArty
627533bed6 minimally add pytest (#1859)
* minimally add pytest

* updated docs and pytest command

* prevented milvus integration test from running if milvus is not installed
2023-04-16 21:55:53 +01:00
BillSchumacher
167628c696 Add fields to disable the command if needed by configuration, blacked. 2023-04-16 15:49:36 -05:00
BillSchumacher
df5cc3303f move tests and cleanup. 2023-04-16 15:35:25 -05:00
Bates Jernigan
7b7d7c1d74 add space on warning message 2023-04-16 16:33:52 -04:00
BillSchumacher
ec8ff0fcde Merge branch 'command_registry' of https://github.com/kreneskyp/Auto-GPT into plugin-support 2023-04-16 15:25:21 -05:00
Richard Beales
7d9269e1a1 Merge pull request #1679 from Slowly-Grokking/master
data_ingestion.py "no module named 'autogpt'" fix and ReadMe update
2023-04-16 21:17:45 +01:00
Richard Beales
33aae9ab17 Merge pull request #1743 from lonrun/lonrundev
Add run scripts for shell
2023-04-16 20:59:01 +01:00
Richard Beales
399690e1ef Merge pull request #1925 from MrBrain295/patch-1
Fix README.md
2023-04-16 20:58:01 +01:00
Slowly-Grokking
c8a349a573 Merge branch 'Significant-Gravitas:master' into master 2023-04-16 14:57:14 -05:00
Richard Beales
5ef103b68a Merge pull request #1977 from cryptidv/fix-memory
Fixed new backends not being added to supported memory
2023-04-16 20:54:14 +01:00
BillSchumacher
176f74bd3a Merge pull request #1866 from merwanehamadi/feature/benchmark
benchmark json errors, clean json parsing code and implement json schema
2023-04-16 14:35:08 -05:00
Richard Beales
34f9bc40b3 Merge pull request #1983 from jakubbober/patch-1
Add "Memory Backend Setup" subtitle
2023-04-16 20:24:10 +01:00
Eesa Hamza
2d24876530 Fix linter issues 2023-04-16 22:16:43 +03:00
BillSchumacher
3fadf2c90b Blacked 2023-04-16 14:15:38 -05:00
BillSchumacher
c544cebbe6 Merge branch 'master' of https://github.com/BillSchumacher/Auto-GPT into plugin-support 2023-04-16 14:15:15 -05:00
Richard Beales
f746107697 Update docker-hub image push action
Change the trigger to on-release rather than on-push, otherwise the image will be tagged with the wrong (previous) version
2023-04-16 20:11:45 +01:00
Eesa Hamza
9b6bce4592 Improve the error logging for OAI Issues 2023-04-16 22:10:48 +03:00
Slowly-Grokking
5b3afeccc1 Merge branch 'Significant-Gravitas:master' into master 2023-04-16 14:06:02 -05:00
Jakub Bober
dc80a5a2ec Add "Memory Backend Setup" subtitle
Add the subtitle to match the Table of Contents
2023-04-16 21:01:18 +02:00
Merwane Hamadi
fdb0a06803 fix conflict 2023-04-16 11:44:21 -07:00
Eesa Hamza
3944f29add Fixed new backends not being added to supported memory 2023-04-16 21:40:09 +03:00
Merwane Hamadi
45a2dea042 fixed flake8 2023-04-16 11:34:38 -07:00
Merwane Hamadi
bb541ad3a7 Update requirements.txt with new dependencies and move tweepy 2023-04-16 11:34:38 -07:00
Merwane Hamadi
dca10ab876 Add benchmark test for Entrepreneur-GPT with difficult user 2023-04-16 11:34:38 -07:00
Merwane Hamadi
75162339f5 Add empty __init__.py to benchmark directory 2023-04-16 11:34:38 -07:00
Merwane Hamadi
b2b31dbc8f Update logs.py with new print_assistant_thoughts function 2023-04-16 11:34:38 -07:00
Merwane Hamadi
63d2a1085c Add JSON validation utility function 2023-04-16 11:34:38 -07:00
Merwane Hamadi
af50d6cfb5 Add JSON schema for LLM response format version 1 2023-04-16 11:34:38 -07:00
Merwane Hamadi
cfbec56b2b Refactor parsing module and move JSON fix function to appropriate location 2023-04-16 11:34:37 -07:00
Merwane Hamadi
fec25cd690 Add master_json_fix_method module for unified JSON handling 2023-04-16 11:33:17 -07:00
Merwane Hamadi
5c67484295 Remove deprecated function from bracket_termination.py 2023-04-16 11:33:15 -07:00
Merwane Hamadi
70100af98e Refactor get_command function in app.py to accept JSON directly 2023-04-16 11:32:27 -07:00
Merwane Hamadi
bf24cd9508 Refactor agent.py to improve JSON handling and validation 2023-04-16 11:32:27 -07:00
Merwane Hamadi
d934d226ce Update .gitignore to properly handle virtual environments 2023-04-16 11:32:27 -07:00
Merwane Hamadi
005479f8c3 Add benchmark GitHub action workflow 2023-04-16 11:32:27 -07:00
Richard Beales
97d62cc16b Merge pull request #1973 from Significant-Gravitas/master
Merge for Release 0.2.1
2023-04-16 19:23:32 +01:00
Richard Beales
a91ef56954 Remove warnings if memory backend is not installed 2023-04-16 19:20:19 +01:00
BillSchumacher
5802f17726 Merge pull request #1968 from jayceslesar/fix/type-annotations
unify annotations to future syntax
2023-04-16 13:10:04 -05:00
BillSchumacher
1df47bb0be Add in one more place. 2023-04-16 13:08:16 -05:00
Jayce Slesar
713e4c1822 Merge branch 'master' into fix/type-annotations 2023-04-16 14:05:13 -04:00
jayceslesar
8990911522 unify annotations to future syntax 2023-04-16 14:02:48 -04:00
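The "future syntax" referred to here is PEP 563's postponed evaluation of annotations, e.g.:

```python
from __future__ import annotations

# With the future import, annotations are stored as strings and not
# evaluated at definition time, so builtin generics (dict[str, int])
# and the X | None union syntax work even on Python 3.8/3.9.
def lookup(table: dict[str, int], key: str) -> int | None:
    return table.get(key)

print(lookup({"a": 1}, "a"))  # 1
```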
SBNovaScript
13602b4a63 Add list type check 2023-04-16 18:56:09 +01:00
SBNovaScript
f02b6832e2 Fix google result encoding. 2023-04-16 18:56:09 +01:00
Peter Svensson
5634eee2cf removed erroneous whitespace to appease lint 2023-04-16 18:44:06 +01:00
Peter Svensson
4fa97e9218 removed options so that @pi can merge this and another commit easily 2023-04-16 18:44:06 +01:00
bvoo
cd78f21b51 cleanup 2023-04-16 18:44:06 +01:00
Peter Svensson
e6d2de7893 removed debug flag 2023-04-16 18:44:06 +01:00
Peter Svensson
92ab3e0e8b fixes #1821 by installing required drivers and adding options to chromedriver 2023-04-16 18:44:06 +01:00
Reinier van der Leer
41a0a68782 fix(issue template): GPT-3 checkbox not required 2023-04-16 18:36:09 +01:00
Reinier van der Leer
5698689361 Update bug report template
Add GPT-3 checkbox & emphasize to search for existing issues first
2023-04-16 18:36:09 +01:00
Richard Beales
4a98745788 Merge pull request #1932 from smendig/env-template-fix
Fix .env.template documentation, incorrect default value for TEMPERATURE
2023-04-16 18:30:49 +01:00
Reinier van der Leer
11620cc571 Fix and consolidate command workspace resolution 2023-04-16 18:27:39 +01:00
Richard Beales
35175fc19b Merge pull request #1934 from Bentlybro/fix-readme
simply removing a duplicate "Milvus Setup" in the README.md
2023-04-16 18:11:42 +01:00
GyDi
c3f01d9b2f fix: config save and load path inconsistent 2023-04-16 17:28:55 +01:00
liuyachen
83930335f0 Fix README 2023-04-16 17:27:13 +01:00
Pi
ccf3c7b89e Update file_operations.py 2023-04-16 17:25:58 +01:00
Steve
5b428f509b fix file logging issue 2023-04-16 17:25:58 +01:00
Bently
4a67c687c3 simply removing a duplicate "Milvus Setup" in the README.md 2023-04-16 17:20:30 +01:00
Sabin Mendiguren
fb9430da0a Update .env.template
Small fix for the TEMPERATURE to show the real default value
2023-04-16 09:12:50 -07:00
Gabe
9c8d95d4db Fix README.md
New owner.
2023-04-16 11:05:00 -05:00
0xArty
ad7cefa10c updated contributing docs 2023-04-16 11:19:38 +01:00
cs0lar
0b936a2bb8 fixes index name to classname conversion 2023-04-16 10:48:43 +01:00
BillSchumacher
5e67722836 Merge pull request #1793 from ickma/fix-google-search-encoding
Fix google api fetch error
2023-04-16 04:10:39 -05:00
BillSchumacher
2193d64f7e Merge pull request #1826 from hdkiller/add-only-to-prompt-if-model-is-set
Only add audio to text command to the prompt if model is set
2023-04-16 04:05:29 -05:00
BillSchumacher
19d2aa5c97 Merge pull request #1825 from farizrahman4u/patch-3
Remove least relevant items from memory first
2023-04-16 04:01:39 -05:00
HDKiller
405632f187 Only add audio to text command to the prompt if model is set 2023-04-16 08:57:23 +00:00
Fariz Rahman
4f3bb609df Remove least relevant items from memory first 2023-04-16 14:23:02 +05:30
BillSchumacher
5f2f694dca Merge pull request #1817 from Significant-Gravitas/richbeales-patch-1
Create a Docker image on DockerHub on release to stable
2023-04-16 03:52:54 -05:00
Richard Beales
bc09ce93eb Create a Docker image on DockerHub on release to stable 2023-04-16 09:20:16 +01:00
BillSchumacher
6a0a3811d9 Merge pull request #1814 from zzzgydi/fix-prompt
fix: add necessary space to the prompt
2023-04-16 02:59:14 -05:00
GyDi
bf98791330 fix: add necessary space to the prompt 2023-04-16 15:50:26 +08:00
Toran Bruce Richards
1cc3a00eb2 Updates Sponsors 2023-04-16 19:18:08 +12:00
BillSchumacher
ad4c3e055b Merge pull request #424 from cs0lar/feature/weaviate-memory
Feature/weaviate memory
2023-04-16 02:10:00 -05:00
BillSchumacher
b865e2c2f8 Fix README 2023-04-16 02:08:38 -05:00
Slowly-Grokking
4173e184bd Merge branch 'master' into master 2023-04-16 02:05:47 -05:00
BillSchumacher
37a1dc1e34 Merge branch 'master' into feature/weaviate-memory 2023-04-16 02:05:36 -05:00
Slowly-Grokking
9389509017 Update README.md 2023-04-16 02:01:42 -05:00
BillSchumacher
4cd412c39f Update requirements.txt 2023-04-16 01:55:34 -05:00
Toran Bruce Richards
34bedec044 Updates sponsors list 2023-04-16 18:54:56 +12:00
cs0lar
23b89b80cd merged master and resolved conflicts 2023-04-16 07:49:21 +01:00
Slowly-Grokking
a7c52579f8 Merge branch 'master' into master 2023-04-16 01:49:06 -05:00
BillSchumacher
6aa76ec794 Merge pull request #1796 from Significant-Gravitas/richbeales-patch-1
Create Docker Image CI
2023-04-16 01:39:49 -05:00
BillSchumacher
cfdd7c1206 Merge pull request #1787 from chyezh/fix_readme
README fix, update Table Of Contents, fix and better memory backend s…
2023-04-16 01:38:38 -05:00
BillSchumacher
b23b832332 Merge pull request #1436 from CalCameron/master
File Logger that tracks changes to file operations to prevent looping
2023-04-16 01:33:25 -05:00
BillSchumacher
0d1fd4fcf0 Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/1436 2023-04-16 01:32:18 -05:00
Richard Beales
f048f88337 Merge pull request #1700 from OmriGM/tests/chat-tests
Added tests for `create_chat_message` and `generate_context` methods of the chat module
2023-04-16 07:16:04 +01:00
chyezh
e61c48ea85 README fix, update Table Of Contents, fix and improve memory backend setup guide 2023-04-16 14:14:57 +08:00
Richard Beales
c5f513ae2d Merge pull request #1732 from alexonab/patch-5
Fix: update bracket_termination.py with f-string
2023-04-16 07:02:59 +01:00
Richard Beales
2ea9a8bb1b Merge pull request #1719 from Ding3LI/master
Add missing clarifications and method usages
2023-04-16 07:01:20 +01:00
Richard Beales
2038dff027 Merge pull request #1798 from yueliu1999/develop
Catch exception of repository clone
2023-04-16 06:59:20 +01:00
Richard Beales
236f30043c Merge pull request #1730 from Zorinik/master
Typo in prompt start - missing space resulted in joined words in the prompt
2023-04-16 06:56:38 +01:00
Richard Beales
c6c08eb0e7 Merge pull request #1794 from Explorergt92/patch-3
Update README.md .env.template instructions
2023-04-16 06:49:11 +01:00
Richard Beales
7872c4ecf4 clarify/tweak instructions and wording 2023-04-16 06:48:03 +01:00
Yue Liu
ca47a58a5d Catch exception of repository clone 2023-04-16 13:47:13 +08:00
BillSchumacher
05bafb9838 Fix fstring bug. 2023-04-16 00:40:00 -05:00
BillSchumacher
abb54df4d0 Add custom commands to execute_command via promptgenerator 2023-04-16 00:37:21 -05:00
Richard Beales
17f3df0a04 Create Docker Image CI
Github action to build the docker image on CI
2023-04-16 06:31:17 +01:00
John
4374e4a43f Update to README.md
fixed syntax
2023-04-16 01:31:00 -04:00
BillSchumacher
83403ad3ab add pre_command and post_command hooks. 2023-04-16 00:20:00 -05:00
BillSchumacher
17478d6a05 Add post planning hook 2023-04-16 00:09:11 -05:00
John
b64a4881d9 Update README.md clarifying the .env.template step 2023-04-16 01:05:51 -04:00
BillSchumacher
397627d1b9 add post_instruction hook 2023-04-16 00:01:23 -05:00
BillSchumacher
00225e01b3 Fix another bad implementation detail. 2023-04-15 23:54:20 -05:00
BillSchumacher
fc7db7d86f Fix bad logic probably. 2023-04-15 23:51:43 -05:00
Richard Beales
765166e807 Merge pull request #1763 from danmohad/update-docker-requirements
Added packages necessary to run docker from terminal
2023-04-16 05:48:46 +01:00
Richard Beales
63501c2ff4 Merge branch 'master' into update-docker-requirements 2023-04-16 05:48:17 +01:00
chao ma
2576b299e7 Fix google api fetch error 2023-04-16 12:45:49 +08:00
BillSchumacher
ee42b4d06c Add pre_instruction and on_instruction hooks. 2023-04-15 23:45:16 -05:00
Richard Beales
ee8aa5074f Merge pull request #1779 from Qinbf/master
add tweepy module
2023-04-16 05:43:55 +01:00
Richard Beales
6703beea22 Merge pull request #1791 from kumayu0108/my_branch
resolved tweepy not in requirements.txt
2023-04-16 05:36:58 +01:00
Ayush Kumar
a2000b4b9d resolved tweepy not in requirements.txt 2023-04-16 09:53:51 +05:30
BillSchumacher
09a5b3149d Add on_planning hook. 2023-04-15 23:01:01 -05:00
BillSchumacher
68e26bf9d6 Refactor main startup to store AIConfig on Agent for plugin usage. 2023-04-15 22:43:17 -05:00
BillSchumacher
e36b74893f Add name and role to prompt generator object for maximum customization. 2023-04-15 22:33:56 -05:00
BillSchumacher
2761a5c361 Add post_prompt hook 2023-04-15 22:18:55 -05:00
覃秉丰
d5ae51aab0 add tweepy module 2023-04-16 11:18:48 +08:00
BillSchumacher
b7a29e71cd Refactor prompts into package, make the prompt able to be stored with the AI config and changed. Fix settings file. 2023-04-15 22:15:34 -05:00
BillSchumacher
1af463b03c Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into plugin-support 2023-04-15 21:37:27 -05:00
Slowly-Grokking
66ee7e1a81 Update README.md 2023-04-15 21:33:26 -05:00
Slowly-Grokking
16553be539 Merge branch 'master' into master 2023-04-15 21:29:14 -05:00
BillSchumacher
4daa083fd3 Merge pull request #1767 from Bentlybro/ElevenLabsIDs
ElevenLabs Voice IDs
2023-04-15 21:25:46 -05:00
BillSchumacher
bf3142ad67 Add eleven labs voice IDs. 2023-04-15 21:24:40 -05:00
BillSchumacher
32e09665ad Revert "Show README.md Love"
This reverts commit 7abc03e523.
2023-04-15 21:23:12 -05:00
BillSchumacher
74a0944862 Revert "Show readme love"
This reverts commit 4a38fbaa99.
2023-04-15 21:23:09 -05:00
BillSchumacher
a52be46e69 Revert "Show readme love"
This reverts commit 6f7153324c.
2023-04-15 21:23:05 -05:00
BillSchumacher
7b4e2bdb4d Revert "Show readme love"
This reverts commit fd1a11c452.
2023-04-15 21:23:01 -05:00
BillSchumacher
d141383305 Revert "All ElevenLabs voice IDs"
This reverts commit b225bc24dc.
2023-04-15 21:22:53 -05:00
Bently
b225bc24dc All ElevenLabs voice IDs 2023-04-16 02:56:59 +01:00
Danyal Mohaddes
bdb93631d6 added packages necessary to run docker from terminal 2023-04-15 21:23:32 -04:00
Bently
fd1a11c452 Show readme love 2023-04-16 02:09:35 +01:00
Bently
6f7153324c Show readme love 2023-04-16 02:09:00 +01:00
Bently
4a38fbaa99 Show readme love 2023-04-16 02:08:14 +01:00
Bently
7abc03e523 Show README.md Love 2023-04-16 01:46:46 +01:00
Bently
8a89b6be12 Merge branch 'master' of https://github.com/Bentlybro/Auto-GPT 2023-04-16 01:42:16 +01:00
BillSchumacher
4870356899 Merge pull request #1718 from gucky92/transcribe_audio_huggingface
Transcribe audio using huggingface
2023-04-15 19:33:20 -05:00
BillSchumacher
017371b492 Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/1718 2023-04-15 19:32:05 -05:00
BillSchumacher
a3f25ca5af Merge pull request #1263 from warmthsea/master
Update consistent code command style
2023-04-15 19:29:12 -05:00
BillSchumacher
cc51abd4dd Delete activity.log
remove accidentally added file.
2023-04-15 19:28:15 -05:00
BillSchumacher
1908af52df Delete error.log
remove accidentally added file.
2023-04-15 19:28:07 -05:00
BillSchumacher
a99beb0628 Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/1263 2023-04-15 19:27:25 -05:00
BillSchumacher
fef9a1e42f Merge 2023-04-15 19:24:12 -05:00
Bently
6a98ebdb9c Fixes README.md
Fix README.md for now
2023-04-16 00:40:08 +01:00
lonrun
08eb2566e4 Add run scripts for shell 2023-04-16 07:37:50 +08:00
roby.parapat
8cbe438ad5 move comment to correct position 2023-04-16 06:33:43 +07:00
Slowly-Grokking
6e9cc463b3 Merge branch 'Significant-Gravitas:master' into master 2023-04-15 18:09:58 -05:00
Pi
60881ed856 Add \n to pass linter-check 2023-04-15 23:48:27 +01:00
Pi
d5534f1e5f Add missing terminal \n 2023-04-15 23:48:27 +01:00
DaoAdvocate
424564825a .env 2023-04-15 23:48:27 +01:00
DaoAdvocate
c30a621195 updates 2023-04-15 23:48:27 +01:00
DaoAdvocate
cfba3d0a60 twitter_send_tweets_command 2023-04-15 23:48:27 +01:00
Zorinik
f817daba17 Merge branch 'Significant-Gravitas:master' into master 2023-04-16 00:33:34 +02:00
Domenico Giambra
04189de9c5 Prints to console what the assistant wants to say 2023-04-16 00:32:26 +02:00
BillSchumacher
f785c8cf03 Merge pull request #96 from ryanmac/playwright-browser
Use playwright instead of requests for browse
2023-04-15 17:31:45 -05:00
Mike M
9e4cc5cc78 Fix: update bracket_termination.py with f-string 2023-04-15 17:31:36 -05:00
BillSchumacher
ef4e4eb5d4 Blacked 2023-04-15 17:30:28 -05:00
Domenico Giambra
f57e3cfecb Typo in prompt start - missing space resulted in joined words in the prompt 2023-04-16 00:27:16 +02:00
Itamar Friedman
5a8700060e fixing tests to fit latest merges into master 2023-04-15 23:19:50 +01:00
BillSchumacher
f2035231e3 Refactor and Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/96 2023-04-15 17:12:59 -05:00
Matthias Christenson
fd824143e9 Merge branch 'Significant-Gravitas:master' into transcribe_audio_huggingface 2023-04-15 23:55:16 +02:00
gucky92
572aedfcef Merge branch 'transcribe_audio_huggingface' of https://github.com/gucky92/Auto-GPT into transcribe_audio_huggingface 2023-04-15 23:53:03 +02:00
gucky92
973e3c56b7 change 'image' to 'file' 2023-04-15 23:53:00 +02:00
BillSchumacher
1586966003 Merge pull request #124 from EricFedrowisch/master
First draft at adding persistent memory via sqlite3
2023-04-15 16:42:14 -05:00
BillSchumacher
4a19124cb7 Blacked. 2023-04-15 16:40:12 -05:00
BillSchumacher
f86ca43b2f Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/124
Moved code to new package to integrate later perhaps.
2023-04-15 16:38:58 -05:00
Ding3LI
a6432e6ce4 [template] env template: added clarification, optional usages 2023-04-15 16:26:42 -05:00
Matthias Christenson
18168cc347 Merge branch 'Significant-Gravitas:master' into transcribe_audio_huggingface 2023-04-15 23:22:37 +02:00
BillSchumacher
2fb1b70a14 Merge pull request #87 from mharris717/improve-extract_hyperlinks-040323
Improve extract_hyperlinks to honor base url
2023-04-15 16:21:59 -05:00
BillSchumacher
52bb22d8d1 Merge 2023-04-15 16:20:43 -05:00
gucky92
3239d6879b Merge branch 'master' of https://github.com/gucky92/Auto-GPT 2023-04-15 23:19:21 +02:00
gucky92
9696fc622c Transcribing audio 2023-04-15 23:19:20 +02:00
BillSchumacher
9cf7227a67 Merge branch 'master' of https://github.com/Significant-Gravitas/Auto-GPT into pr/87 2023-04-15 16:17:57 -05:00
Omri Grossman
5495d6c0d3 Removed redundant test 2023-04-16 00:09:33 +03:00
cs0lar
03d2032a6a merged master and resolved conflicts 2023-04-15 22:08:38 +01:00
Richard Beales
dfe5550ad0 Merge pull request #1712 from BillSchumacher/master
Merge #72
2023-04-15 22:01:32 +01:00
BillSchumacher
09f13033ae Merge branch 'escape-double-quotes-in-json-values' of github.com:PhilipAD/Auto-GPT 2023-04-15 15:50:50 -05:00
Richard Beales
bd525ab9e2 Merge pull request #1702 from manskx/master
Update README.md "ELEVENLABS_API_KEY"
2023-04-15 21:43:52 +01:00
Slowly-Grokking
d626a0637d Merge branch 'master' into master 2023-04-15 15:42:23 -05:00
Omri Grossman
167d1be130 Moved test_chat to /unit 2023-04-15 23:40:03 +03:00
Slowly-Grokking
92c0106e81 Update README.md 2023-04-15 15:33:47 -05:00
Omri Grossman
accec9ab75 Lint fixes - extra whitespace 2023-04-15 23:32:22 +03:00
BillSchumacher
862adb2b64 Merge pull request #1518 from EdgarBarrantes/patch-1
Update docs: Data ingestion script location
2023-04-15 15:31:23 -05:00
BillSchumacher
6d369671c8 Merge branch 'master' into patch-1 2023-04-15 15:30:18 -05:00
Mansy
9607ae0c1e Update README.md
Use correct var name "ELEVENLABS_API_KEY"
2023-04-15 22:26:00 +02:00
Omri Grossman
e3751f0e36 Removed comments 2023-04-15 23:23:37 +03:00
BillSchumacher
ff5b8f1490 Merge pull request #1683 from Hyaxia/feature/rate-limit-log
in debug mode add a log about rate limit error
2023-04-15 15:18:28 -05:00
BillSchumacher
8978844111 Update llm_utils.py
Remove pass
2023-04-15 15:17:23 -05:00
Omri Grossman
8293d96f24 Added tests for create_chat_message and generate_context methods of the chat module 2023-04-15 23:15:28 +03:00
BillSchumacher
106bf2c52e Merge pull request #1690 from Bedrock-Utilities/master
Update requirements.txt
2023-04-15 15:15:03 -05:00
Richard Beales
43a5a9e653 Merge pull request #1682 from thisislvca/master
Update README.md
2023-04-15 21:11:01 +01:00
BillSchumacher
898b7eed8a Merge pull request #1635 from Imccccc/feature/embedding-with-retry
Embedding Improvement
2023-04-15 15:04:31 -05:00
BillSchumacher
e758a4de3e Update pinecone.py
Fix blank lines.
2023-04-15 15:03:33 -05:00
BillSchumacher
93b3e8428c Update llm_utils.py
Fix trailing whitespace
2023-04-15 15:03:03 -05:00
nponeccop
3f535e3b56 Merge pull request #1694 from BillSchumacher/apply-quality
Quality update
2023-04-15 21:58:47 +02:00
BillSchumacher
11d6dabe37 Quality update 2023-04-15 14:55:13 -05:00
cs0lar
51224229eb fixed merge conflicts 2023-04-15 20:32:31 +01:00
DJ Stomp
bebc015eb3 Update requirements.txt 2023-04-15 12:30:09 -07:00
hyaxia
2f776957d8 changed error msg 2023-04-15 22:20:05 +03:00
Luca Meneghetti
27a21e848d Update README.md
Fixed a typo in the README.md file:

From "may often in a **broken** state." to "may often be in a **broken** state."
2023-04-15 21:07:27 +02:00
hyaxia
051b5372ce in debug mode add a log about rate limit error 2023-04-15 22:06:27 +03:00
Richard Beales
82f53aae54 Merge pull request #1678 from Significant-Gravitas/nponeccop-patch-1
Fix run.bat to use the new module
2023-04-15 20:00:57 +01:00
Slowly-Grokking
8bcab8796e Merge branch 'Significant-Gravitas:master' into master 2023-04-15 14:00:15 -05:00
Richard Beales
097fcd8f56 Merge pull request #1659 from CatMe0w/patch-1
Fix typo in .env.template
2023-04-15 19:59:56 +01:00
Slowly-Grokking
f5c600a9f8 relocate data_ingestion.py
making this work without code changes

update readme
2023-04-15 13:59:42 -05:00
nponeccop
77f44cdbbe Fix run.bat to use the new module 2023-04-15 20:59:38 +02:00
Richard Beales
1c12a84ded Merge pull request #1658 from wangxuqi/milvus_memory
Fix Milvus as a long-term memory backend.
2023-04-15 19:53:02 +01:00
Pi
a1d201028b Added missing \n at end of file 2023-04-15 19:51:52 +01:00
Richard Beales
33e8d61959 Merge pull request #1676 from Significant-Gravitas/p-i--patch-1
Update requirements.txt
2023-04-15 19:48:34 +01:00
Pi
35192cf413 Update requirements.txt 2023-04-15 19:47:04 +01:00
Richard Beales
c8c4e2b59c Fix invalid config import in git_operations 2023-04-15 19:28:48 +01:00
Richard Beales
c1f18b5324 Revert "Add ability to use local embeddings model" (#1662) 2023-04-15 19:25:44 +01:00
Ding3LI
9f822ec5ca [doc] Improvements: Tutorials and Explanation (#1603)
* [doc] Modified README: detailed explanation, cleared conceptual confusions, added explicit examples

* [doc] Modified README: emphasize precedence note, concise description

* [doc] Modified README: fixed CMD to project directory
2023-04-15 19:21:39 +01:00
CatMe0w
11f3e97b28 Fix typo in .env.template 2023-04-16 01:58:40 +08:00
cs0lar
899c815676 fixed auth code 2023-04-15 18:55:45 +01:00
cs0lar
8916b76f11 fixed change request 2023-04-15 18:52:59 +01:00
Richard Beales
1ce6419698 Merge pull request #1637 from yueliu1999/master
Error notation in the split_file function in file_operations.py
2023-04-15 18:47:51 +01:00
Richard Beales
e0590e08d7 Merge pull request #1471 from gersh/fix_agents
Fix list_agents to return string instead of JSON object
2023-04-15 18:46:50 +01:00
xuqi.wxq
5e189c83ee Fix Milvus as a long-term memory backend. 2023-04-16 01:45:38 +08:00
Richard Beales
fdac81e908 Merge pull request #1489 from hdkiller/remove-please-from-prompts
Remove please from prompts
2023-04-15 18:45:05 +01:00
Richard Beales
3e6a3c42c2 Merge pull request #870 from DenTheProgrammer/master
Easy run with bat file (with requirements check and install if needed)
2023-04-15 18:44:25 +01:00
Richard Beales
f6a8da0b07 Merge pull request #1507 from jacobtohahn/fix_agent_key_error
Fix agent key error
2023-04-15 18:41:50 +01:00
Richard Beales
b171774051 Merge pull request #1625 from alexonab/patch-4
Fix for 'requires string as left operand, not PosixPath'
2023-04-15 18:40:13 +01:00
Richard Beales
36019cb5ab Merge pull request #1618 from younessZMZ/master
Adjust test_config file
2023-04-15 18:39:31 +01:00
Richard Beales
51fc59b45f Merge pull request #1610 from adityaoke/adityaoke/fix_json_str
[1607] Sourcery is detecting linting issues in autogpt/json_fixes/aut…
2023-04-15 18:33:34 +01:00
Richard Beales
af46c0471e Merge pull request #1609 from younessZMZ/branch1
Adjust test_prompt_generator and add test report generation
2023-04-15 18:32:27 +01:00
Richard Beales
63936209a0 Create a list of synonyms for commands when the AI hallucinates (#1526) 2023-04-15 18:25:45 +01:00
Richard Beales
5f4e317321 Only add execute shell scripts to prompt if AI is allowed to do it. (#1551) 2023-04-15 18:24:57 +01:00
Richard Beales
33b7866377 Merge pull request #1229 from edcohen08/clone-github-repository
command clone github repository
2023-04-15 18:19:47 +01:00
Richard Beales
17cdeee214 Merge pull request #1320 from Tymec/master
Add ability to use local embeddings model
2023-04-15 18:13:16 +01:00
Gershon Bialer
6a3fcda751 Merge remote-tracking branch 'origin/master' into fix_agents 2023-04-15 10:04:15 -07:00
Richard Beales
9a09a35502 Merge pull request #1623 from eltociear/patch-3
Update README.md
2023-04-15 17:56:27 +01:00
HDKiller
885d81b354 remove "please" from prompt in text.py 2023-04-15 16:55:32 +00:00
yueliu1999
58eb0b37b4 Update file_operations.py
Error notation. In the split_file function, line 43, text->content.
2023-04-16 00:52:54 +08:00
Denis Mozhayskiy
5c342bd974 spelling 2023-04-15 19:48:10 +03:00
Imccccc
f67b81e200 Embedding Improvement
1. move embedding function into llm_utils
2. add retry feature within the embedding function
2023-04-16 00:47:41 +08:00
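A sketch of the retry pattern this describes; all names below are hypothetical (the branch name indicates the actual change lives in llm_utils and wraps the OpenAI embedding call):

```python
import time

def embed_with_retry(create_embedding, text: str, attempts: int = 5):
    """Call create_embedding(text), retrying transient failures
    with exponential backoff between tries."""
    for attempt in range(attempts):
        try:
            return create_embedding(text)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error
            time.sleep(2 ** attempt)

# Usage with a stand-in embedding function:
print(embed_with_retry(lambda t: [0.0] * 3, "hello"))
```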
Jacob Hahn
26b3126c34 Merge branch 'master' into fix_agent_key_error 2023-04-15 12:36:18 -04:00
Mike M
712982a7d5 Fix for 'requires string as left operand, not PosixPath' 2023-04-15 11:21:43 -05:00
Ikko Eltociear Ashimine
919a784a20 Update README.md
HuggingFace -> Hugging Face
2023-04-16 01:19:46 +09:00
younessZMZ
110e2f3ae5 Adjust test_config file 2023-04-15 15:58:01 +00:00
Richard Beales
793ea4d893 Merge pull request #1590 from twajothi/master
Fixed the import
2023-04-15 16:42:14 +01:00
Richard Beales
de575eba60 Merge pull request #1586 from nicostubi/feature/gitignore-additions
Some more file extensions to ignore
2023-04-15 16:39:36 +01:00
Thibault Twahirwa
fe1241aa61 Merge branch 'Significant-Gravitas:master' into master 2023-04-15 11:35:50 -04:00
Richard Beales
d7f8748572 Merge pull request #1580 from youkaichao/fix_type
update type annotation
2023-04-15 16:35:06 +01:00
Richard Beales
5bc7304675 Merge pull request #1579 from bufo24/master
remove sourcery from docker build
2023-04-15 16:33:51 +01:00
Thibault Twahirwa
3564fdaec6 Merge branch 'Significant-Gravitas:master' into master 2023-04-15 11:33:04 -04:00
Richard Beales
7c0789252e Merge pull request #1572 from cryptidv/browser-agnostic
Make browsing with Selenium Browser Agnostic
2023-04-15 16:32:24 +01:00
Richard Beales
26ffa41f20 Merge pull request #1567 from drikusroor/cover-token-counter
test: Write unit tests for token_counter
2023-04-15 16:30:37 +01:00
HDKiller
fbe1b0e5b0 remove summary.py from this branch to avoid merge conflict 2023-04-15 17:30:18 +02:00
Richard Beales
a84fc06483 Merge pull request #1558 from pitmonticone/master
Fix typo
2023-04-15 16:29:54 +01:00
Richard Beales
d6124b77cc Merge pull request #1528 from shaiss/patch-1
typo in TOC
2023-04-15 16:28:23 +01:00
Richard Beales
f957331310 Merge pull request #1525 from droosma/patch-1
Update docker-compose.yml
2023-04-15 16:27:40 +01:00
Tymec
e0af761c35 chore: flake8 formatting 2023-04-15 17:26:58 +02:00
Thibault Twahirwa
d28ac11d56 Merge branch 'Significant-Gravitas:master' into master 2023-04-15 11:23:55 -04:00
Richard Beales
3ee961c600 Merge pull request #1417 from merwanehamadi/feature/change-default-temperature
make 0 the default temperature
2023-04-15 16:22:45 +01:00
Richard Beales
4354065f78 Merge pull request #1381 from jedak1ah/master
Fixed error when google results might have weird characters
2023-04-15 16:20:49 +01:00
Aditya Oke
df4f160846 [1607] Sourcery is detecting linting issues in autogpt/json_fixes/auto_fix.py 2023-04-15 08:18:59 -07:00
Richard Beales
5e18bb4b61 Merge pull request #1304 from JuroOravec/master
README: Explain OpenAI billing for API key (Fixes issue about "API Rate Limit Reached. Waiting 20 seconds ... Failed to get response after 5 retries")
2023-04-15 16:16:24 +01:00
Eddie Cohen
71abd6f2e4 linting 2023-04-15 11:15:18 -04:00
Richard Beales
8c4b985df0 Merge pull request #1269 from suensummit/cleanup-unused-azure-env
Cleanup azure parameters in env.template and remove unused env in config.py
2023-04-15 16:15:16 +01:00
Richard Beales
95f7ed607a Merge pull request #1202 from merwanehamadi/feature/setup-integration-tests
Feature/setup integration tests
2023-04-15 16:13:01 +01:00
younessZMZ
0c1ff5d6a4 Adjust test_prompt_generator and add test report generation 2023-04-15 15:10:42 +00:00
Thibault Twahirwa
8ec2538584 Merge branch 'Significant-Gravitas:master' into master 2023-04-15 11:04:09 -04:00
cs0lar
2678a5a74b fixed merge conflicts 2023-04-15 16:01:47 +01:00
Jedakiah
02db53e12f Fixed error when google search contains funny characters 2023-04-15 17:01:29 +02:00
Eddie Cohen
99c4f93ee3 fix rebasing 2023-04-15 10:59:40 -04:00
Jedakiah
0a137e4e63 Merge remote-tracking branch 'origin/master'
# Conflicts:
#	autogpt/commands.py
2023-04-15 16:57:40 +02:00
Eddie Cohen
11faf42c7e move git operations 2023-04-15 10:57:03 -04:00
Eddie Cohen
5f5eac61e3 clone repo method 2023-04-15 10:57:03 -04:00
Eddie Cohen
4f9d5b9e32 commands, git on docker 2023-04-15 10:57:03 -04:00
Eddie Cohen
0569d6652f add command 2023-04-15 10:57:01 -04:00
Eddie Cohen
4d8de551b5 add prompt, env example, config 2023-04-15 10:56:10 -04:00
Eddie Cohen
eb8b3e6622 add gitpython 2023-04-15 10:56:09 -04:00
Richard Beales
5dfdb2e2a9 Merge pull request #801 from chyezh/enable-milvus
enable milvus as memory backend
2023-04-15 15:51:21 +01:00
cs0lar
b2bfd395ed fixed formatting 2023-04-15 15:49:24 +01:00
Summit Suen
a0de3868c6 Cleanup azure parameters in env.template and remove unused env in config.load_azure_config(). 2023-04-15 22:44:50 +08:00
Nicolas Stübi
84aed05ebb Merge branch 'master' into feature/gitignore-additions
# Conflicts:
#	.gitignore
2023-04-15 16:25:31 +02:00
Nicolas Stübi
f9265e9b01 Some more file extensions to ignore 2023-04-15 16:20:26 +02:00
Thibault Twahirwa
44bd3d6717 Fixed the import 2023-04-15 10:17:03 -04:00
youkaichao
a51b37f01c fix dict type annotation 2023-04-15 22:10:22 +08:00
youkaichao
afd2c5e2c6 update type annotation 2023-04-15 22:04:05 +08:00
Tymec
53297e55bf Merge remote-tracking branch 'upstream/master' 2023-04-15 16:03:08 +02:00
Bufo
55facfd8db remove sourcery from docker build 2023-04-15 15:59:14 +02:00
cs0lar
005be024f1 fixed typo 2023-04-15 14:45:16 +01:00
cs0lar
b987cff7da Merge branch 'master' into feature/weaviate-memory 2023-04-15 14:43:01 +01:00
Eesa Hamza
e90e618c5e Added agnostic browser support 2023-04-15 16:28:34 +03:00
chyezh
395d9d0481 enable milvus as memory backend 2023-04-15 21:20:30 +08:00
Drikus Roor
d52381eead fix: Fix imports 2023-04-15 15:20:19 +02:00
Drikus Roor
bdefa24ac6 test: Write unit tests for token_counter 2023-04-15 15:11:25 +02:00
BillSchumacher
1073954fb7 Reorg (#1537)
* Pi's message.

* Fix most everything.

* Blacked

* Add Typing, Docstrings everywhere, organize the code a bit.

* Black

* fix import

* Update message, dedupe.

* Increase backoff time.

* bump up retries
2023-04-15 13:56:23 +01:00
Pietro Monticone
cc7d421c77 Update README.md 2023-04-15 14:07:39 +02:00
Pietro Monticone
c69050fc84 Fix typo 2023-04-15 14:05:23 +02:00
Shai Perednik
a40f335464 typo in TOC
the new memory seeding URL was wrong
  - [🧠 Memory pre-seeding](#-memory-pre-seeding)
2023-04-15 05:58:04 -04:00
Duncan Roosma
d53bd020ea Update docker-compose.yml
Needed to add this to get `docker-compose run auto-gpt` to run successfully
2023-04-15 11:42:12 +02:00
Edgar Barrantes
a791d7a244 Update docs: Data ingestion script location 2023-04-15 12:06:40 +03:00
Jacob Hahn
870c5948d2 Merge branch 'master' into fix_agent_key_error 2023-04-15 04:34:50 -04:00
Jacob Hahn
b2d6987cac Fixed condition where key could be string 2023-04-15 04:17:00 -04:00
BillSchumacher
e986af5de0 Merge pull request #1476 from shaped1/patch-1
Fix all 65 typos of it being gtp instead of GPT
2023-04-15 02:30:17 -05:00
HDKiller
bee1bc8c06 remove "please" from prompt in browser.py 2023-04-15 06:28:14 +00:00
HDKiller
72da564db5 remove "please" from prompt for summarizing text 2023-04-15 06:27:08 +00:00
cs0lar
4c7deef9ae merged master and resolved conflicts 2023-04-15 06:51:04 +01:00
Gershon Bialer
9990b78702 Fix linter issues. 2023-04-14 22:44:42 -07:00
Tymec
4049708aa5 Merge remote-tracking branch 'upstream/master' 2023-04-15 07:18:45 +02:00
Richard Beales
60b2b61b52 Merge pull request #1478 from Torantulino/master
Pulling into stable for version 0.2.0
2023-04-15 06:16:23 +01:00
polygon
82bf1c6367 Fix all 65 typos of it being gtp instead of GPT
In this file alone, Entrepreneur-GPT was referred to as Entrepreneur-GTP 65 times. Curious why, as it doesn't seem like a one-time mistake/typo. It was referred to as Entrepreneur-GPT in the rest of the project, so FTFY.
2023-04-14 22:07:45 -07:00
Gershon Bialer
2644bc86db list_agents should return string not JSON 2023-04-14 21:42:54 -07:00
batyu
6e05db972a Allow local Development without pip install using "pip install -e ." 2023-04-15 06:41:53 +02:00
chao ma
773324dcd6 feat: Add support for running Chrome in Headless mode.
Add headless mode support for Chrome and refactor web page text extraction
2023-04-15 12:34:28 +08:00
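For context, a minimal sketch of how headless Chrome is typically enabled with Selenium (assuming Selenium 4 and a chromedriver on PATH; the HEADLESS_BROWSER name mirrors the .env option and is read here purely for illustration):

import os

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
if os.getenv("HEADLESS_BROWSER", "True") == "True":
    options.add_argument("--headless")  # run Chrome without opening a window

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
page_text = driver.find_element(By.TAG_NAME, "body").text  # extract page text
driver.quit()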
Gershon Bialer
0b4f0f5622 Add unit test for testing adding an agent 2023-04-14 21:34:28 -07:00
Gershon Bialer
bac898f993 Fix list_agents to not call itself. 2023-04-14 20:59:58 -07:00
BillSchumacher
6a93537c42 AI name hotfix. (#1452)
* Pi's message.

* Fix most everything.

* Blacked

* Update agent.py

Hotfix.
2023-04-15 02:29:25 +01:00
Merwane Hamadi
dc4094b264 added smoke test 2023-04-14 17:20:02 -07:00
BillSchumacher
4bb7a598a5 Fix everything (#1444)
* Pi's message.

* Fix most everything.

* Blacked
2023-04-15 01:04:48 +01:00
Tymec
6c9ec32195 test: fixed imports 2023-04-15 01:34:51 +02:00
Tymec
753394228a Merge remote-tracking branch 'upstream/master' 2023-04-15 01:30:20 +02:00
CalCameron
16c0dc9267 File Logger that tracks changes to file operations to prevent looping 2023-04-14 17:47:16 -05:00
Merwane Hamadi
ca5a52f48a Update sys.path to use pathlib in json_tests.py 2023-04-14 15:46:02 -07:00
Merwane Hamadi
36091853e0 Add integration test for write_file command 2023-04-14 15:45:45 -07:00
Merwane Hamadi
7e21123a5d Add README.md with instructions on running tests 2023-04-14 15:45:45 -07:00
Jedakiah
c60e654a9b Merge remote-tracking branch 'origin/master'
# Conflicts:
#	autogpt/commands.py
2023-04-15 00:43:06 +02:00
merwanehamadi
b65b7acace added selenium dependencies (#1432) 2023-04-14 23:33:28 +01:00
Richard Beales
d57d3ea83e Merge pull request #1426 from merwanehamadi/feature/fix-commands.py
fix commands.py
2023-04-14 23:25:33 +01:00
Merwane Hamadi
b144464674 fix commands.py 2023-04-14 15:18:36 -07:00
BillSchumacher
a8cf64736f Pi's message. (#1418) 2023-04-14 23:02:11 +01:00
Tymec
062176d3f5 test: replaced MockConfig with real config
get_embedding function uses config
2023-04-14 22:49:13 +02:00
Tymec
091457a24f fix: removed keyword default from dict.get arguments 2023-04-14 22:48:25 +02:00
Richard Beales
1b3f82e729 Merge pull request #1393 from 0xArty/feature/pre-commit-formatter
Feature/pre commit formatter
2023-04-14 21:46:14 +01:00
Void&Null
55eef983d4 Implemented Selenium based web browsing. (#1397)
* Implemented Selenium based web browsing.

Replaced the default web browsing function with one that uses Selenium to gather information with a visible browser and an overlay.

Included a small bug fix for the missing Google API key, where the code would attempt to use the official Google API with the default placeholder keys from the template.

* Fixed flake8 issues.
2023-04-14 21:35:19 +01:00
0xArty
19a011cf03 Merge branch 'master' into feature/pre-commit-formatter 2023-04-14 20:56:11 +01:00
0xArty
3ec5f1209b added sourcery back 2023-04-14 20:43:18 +01:00
0xArty
328ba5e69e formatting 2023-04-14 20:42:28 +01:00
0xArty
a0e3c238a4 migrating to black formatting 2023-04-14 20:41:45 +01:00
Merwane Hamadi
359c3bc067 make 0 the default temperature 2023-04-14 12:27:04 -07:00
0xArty
6ca6a8aa60 added more tools 2023-04-14 20:17:37 +01:00
Richard Beales
5389b2deb1 Merge pull request #1380 from merwanehamadi/autogpt-namespace-fix-imports
Autogpt namespace fix imports
2023-04-14 20:07:18 +01:00
0xArty
087642f793 added basic project info 2023-04-14 20:00:05 +01:00
0xArty
8da77020b9 added pyproject.toml and .flake8 2023-04-14 19:58:04 +01:00
Merwane Hamadi
9b56ebe5c4 removed json_tests.py test_that_apologies_containing_multiple_json_get_the_correct_one because it breaks 2023-04-14 11:55:18 -07:00
0xArty
4322784b01 added black to the requirments 2023-04-14 19:43:34 +01:00
0xArty
1804a804df updated the contributor guide 2023-04-14 19:36:19 +01:00
0xArty
9d0bc54b07 added pre-commit formatting 2023-04-14 19:29:21 +01:00
Jedakiah
7daa3fc8f9 Merge remote-tracking branch 'origin/master' 2023-04-14 20:18:04 +02:00
Merwane Hamadi
8dbc71da0c added message to redirect users 2023-04-14 11:15:17 -07:00
Merwane Hamadi
adf7c3ac98 added more autogpt prefixes in imports 2023-04-14 11:15:02 -07:00
Jedakiah
d0dd107f39 Fixed error when google results might have weird characters 2023-04-14 20:00:36 +02:00
Dino Hensen
d64f866bfa Convert to python module named autogpt.
Also fixed the Dockerfile.
Converting to module makes development easier.
Fixes coverage script in CI and test imports.
2023-04-14 10:27:41 -07:00
Richard Beales
638c956f72 Merge pull request #1365 from Torantulino/master
Merge into Stable for PR batch 4 v0.1.3
2023-04-14 18:18:06 +01:00
Richard Beales
a17a850b25 Merge pull request #968 from maiko/add_website_memory
Add visited website to memory for recalling content without being limited by the website summary.
2023-04-14 17:55:32 +01:00
Maiko Bossuyt
5a6053594f Merge branch 'Torantulino:master' into add_website_memory 2023-04-14 18:36:26 +02:00
Richard Beales
40ed086f81 Merge pull request #1347 from mikekelly/code-execution-when-already-in-a-container
Execute python via shell if already running in a container
2023-04-14 17:36:02 +01:00
Maiko Bossuyt
483abb1da1 Merge branch 'master' into add_website_memory 2023-04-14 18:34:53 +02:00
Richard Beales
f6f4f1fbc0 Merge pull request #992 from maiko/add_ingest_documents_script
Add data_ingestion.py script for memory pre-seeding
2023-04-14 17:32:35 +01:00
Richard Beales
100fd8d0b9 Merge pull request #1096 from cryptidv/flags-updates
Flags Updates
2023-04-14 17:22:10 +01:00
Mike Kelly
2ba0cb24dc execute python via shell if already running in a container 2023-04-14 17:18:07 +01:00
Maiko Bossuyt
8093ac7949 Merge branch 'master' into add_ingest_documents_script 2023-04-14 18:12:23 +02:00
Richard Beales
d5423fdcaf Merge pull request #1312 from mikekelly/more-robust-log-dir-path
More robust log dir path
2023-04-14 17:11:56 +01:00
Maiko Bossuyt
a67818648e Update browse.py
linting
2023-04-14 18:10:42 +02:00
Tymec
34eac5754c test: fix typo and add newline at the end
- Fixed "embeder" typo to "embedder"
- Added newline at the end of test unit
2023-04-14 18:06:47 +02:00
Tymec
2a147acd3f refactor: fix typo
Changed all occurrences of "embeder" to "embedder".
2023-04-14 17:58:29 +02:00
Tymec
121f4e606c fix: more modular approach for embedding dimension 2023-04-14 17:17:10 +02:00
Pi
71ae22fc7a Merge pull request #1323 from sagarishere/patch-13
update to modern python format syntax
2023-04-14 15:16:30 +01:00
Pi
9706ae8611 Merge pull request #1118 from sweetlilmre/readme-cleanup
Readme cleanup
2023-04-14 14:46:31 +01:00
Pi
aca16dbc5d Merge pull request #754 from meta-fx/added-new-voice
Added new env variable and speech function for alternative TTS voice
2023-04-14 14:45:11 +01:00
sagarishere
b18530a985 update to modern python format syntax
update to modern python format syntax

no logic change
2023-04-14 16:31:45 +03:00
Tymec
653904a359 chore: added memory embeder option to dotenv template 2023-04-14 15:07:13 +02:00
Tymec
fb6684450c test: added tests for memory embeder 2023-04-14 14:56:58 +02:00
Tymec
b042376db4 docs: added comments 2023-04-14 14:53:18 +02:00
Tymec
64db4eef39 fix: added back numpy to requirements 2023-04-14 14:47:13 +02:00
Tymec
967c9270ce feat: ability to use local embeddings model (sBERT) 2023-04-14 14:45:44 +02:00
Mike Kelly
9e27e0165d gitignore the logs file 2023-04-14 13:19:30 +01:00
Mike Kelly
475edd3b40 make the path reference in logger more robust 2023-04-14 12:57:30 +01:00
Juro Oravec
02f23db210 docs: Explain OpenAI billing for API key 2023-04-14 13:00:55 +02:00
Maiko Bossuyt
c0462dbe77 Update file_operations.py
fixed linting
2023-04-14 10:35:52 +02:00
Maiko Bossuyt
6403bf1127 Update data_ingestion.py
fixed linting
2023-04-14 10:35:30 +02:00
Maiko Bossuyt
e147788c72 Update .env.template
BROWSE_CHUNK_MAX_LENGTH default value
2023-04-14 10:33:34 +02:00
Peter Edwards
087ee6ecd0 Merge branch 'readme-cleanup' of https://github.com/sweetlilmre/Auto-GPT into readme-cleanup 2023-04-14 09:20:10 +02:00
Peter Edwards
5c31f46b45 Merge remote-tracking branch 'upstream/master' into readme-cleanup 2023-04-14 09:19:51 +02:00
meta-fx
1612069594 Fixed E302 expected 2 blank lines, found 1 2023-04-14 02:18:17 -05:00
sea
988d6d877f Update consistent code command style 2023-04-14 15:13:50 +08:00
Richard Beales
98efd26456 Merge pull request #1197 from Ronbalt/patch-1
Enable Custom Search API in gcp project
2023-04-14 08:04:28 +01:00
Richard Beales
43935e25f6 Merge pull request #1220 from JesseRWeigel/patch-1
fix misspelling
2023-04-14 07:37:08 +01:00
Richard Beales
13ba2e8165 Merge pull request #1158 from merwanehamadi/feature/wrap-infinite-loop-in-agent-class
wrap infinite loop in class agent
2023-04-14 07:30:04 +01:00
meta-fx
2fd96b68bd Added new line and elevenlabs elements back to the env 2023-04-14 01:28:47 -05:00
Richard Beales
7c4510dfac Merge pull request #1232 from eng-cc/cc-0414-devcontainer
[environments] add devcontainer config
2023-04-14 07:28:39 +01:00
meta-fx
261887cc8e Merge remote-tracking branch 'upstream/master' into added-new-voice 2023-04-14 01:27:34 -05:00
Richard Beales
cba276075b Merge pull request #1231 from sunnypranay/improved-dockerfile
Improve Dockerfile with best practices and optimizations
2023-04-14 07:25:04 +01:00
Richard Beales
2c279ebafe Merge pull request #1242 from morsoli/master
Resolving Unicode encoding issues
2023-04-14 07:23:05 +01:00
Richard Beales
d79184b689 Merge pull request #1236 from zzzgydi/master
fix: remove duplicate debug mode logger
2023-04-14 07:15:47 +01:00
Richard Beales
646cc2be93 Merge pull request #1034 from merwanehamadi/feature/remove-useless-load_variables_method
remove useless load_variables_method
2023-04-14 07:13:34 +01:00
meta-fx
3d783e08bc Resolved conflicts 2023-04-13 22:47:21 -05:00
莫尔索
5e6d0b620a Resolving Unicode encoding issues
Solves the problem that Chinese, Japanese, Korean, and other non-English text is escaped as Unicode sequences when writing the ai_settings.yaml configuration.
2023-04-14 11:38:29 +08:00
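A minimal sketch of the likely fix, assuming PyYAML: by default yaml.dump escapes non-ASCII text as \uXXXX sequences, and allow_unicode=True writes it verbatim.

import yaml

settings = {"ai_name": "测试助手", "ai_goals": ["書類を要約する"]}

with open("ai_settings.yaml", "w", encoding="utf-8") as f:
    # allow_unicode=True keeps CJK characters readable instead of \uXXXX escapes
    yaml.dump(settings, f, allow_unicode=True)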
GyDi
3128397988 fix: remove duplicate debug mode logger 2023-04-14 11:17:46 +08:00
eng-cc
aeb81aa597 [environments] add devcontainer environment 2023-04-14 10:54:59 +08:00
sunnypranay
1f21998f0c Improve Dockerfile with best practices and optimizations 2023-04-13 21:47:28 -05:00
Jesse R Weigel
4666ea0150 fix misspelling 2023-04-13 21:57:31 -04:00
Eesa Hamza
4f923ece60 Added double_check logging to AI Settings validator, and updated README for 'no_memory' 2023-04-14 01:56:45 +03:00
Eesa Hamza
6702a04f76 Add 'no_memory' support for memory flag 2023-04-14 01:50:13 +03:00
Maiko Bossuyt
869373fbfc Merge branch 'master' into add_ingest_documents_script 2023-04-14 00:49:32 +02:00
Maiko Bossuyt
25509f9d25 Update config.py
8192 is the current default
2023-04-14 00:48:07 +02:00
Maiko Bossuyt
c4a45eb406 Merge branch 'master' into add_website_memory 2023-04-14 00:45:41 +02:00
Eesa Hamza
8472bbd455 Added 'Command Line Arguments' section to README 2023-04-14 01:34:30 +03:00
Merwane Hamadi
43efbff4b8 remove useless load_variables_method 2023-04-13 15:26:38 -07:00
Eesa Hamza
05f6e9673f Resolve Linter Issues 2023-04-14 01:23:23 +03:00
Eesa Hamza
47b72df262 Added 'AI_SETTINGS_FILE' to .env 2023-04-14 01:20:43 +03:00
Merwane Hamadi
c59b6b5543 wrap infinite loop in class agent 2023-04-13 15:19:41 -07:00
EH
c7cf00d667 Merge branch 'master' into flags-updates 2023-04-13 23:17:18 +01:00
Ron Balter
f9cbddc9f0 Enable Custom Search API in gcp project
While following this guide to enable google search, this step was missing for me and the API calls to https://customsearch.googleapis.com/customsearch/v1?q= failed with:
"""
Custom Search API has not been used in project <PROJECT_ID> before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/customsearch.googleapis.com/overview?project=<PROJECT_ID> then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
"""

Also checked that https://console.developers.google.com/apis/api/customsearch.googleapis.com on its own redirects to the project that was active in the last GCP session, so there is no need to provide the projectId parameter.
2023-04-14 00:58:51 +03:00
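For illustration, a hedged sketch of the call that fails until the API is enabled, assuming the requests library; q, key, and cx are the standard Custom Search JSON API parameters:

import requests

resp = requests.get(
    "https://customsearch.googleapis.com/customsearch/v1",
    params={"q": "Auto-GPT", "key": "your-google-api-key", "cx": "your-cse-id"},
)
resp.raise_for_status()  # raises for the 403 returned while the API is disabled
for item in resp.json().get("items", []):
    print(item["title"], item["link"])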
Richard Beales
a3024ca80d Merge pull request #774 from Sma-Das/remove_imports
[Cleanup] Removed unneeded imports
2023-04-13 22:23:11 +01:00
Sma Das
439e736b8b Removed unneeded imports 2023-04-13 17:00:03 -04:00
Pi
361fed4e32 Merge pull request #1144 from Torantulino/richbeales-patch-1
Update README - Discord link and unit test link
2023-04-13 21:45:16 +01:00
Richard Beales
04f063130f Merge branch 'master' into richbeales-patch-1 2023-04-13 21:43:10 +01:00
Richard Beales
09f529ebda Merge pull request #836 from mikekelly/add-docker-compose
Add docker compose scheduling
2023-04-13 21:38:43 +01:00
Richard Beales
4b2870fcec Merge pull request #1016 from josephcmiller2/continuous-mode-limit
Continuous mode limit
2023-04-13 21:04:24 +01:00
Richard Beales
7fdacd88e5 Merge pull request #980 from lekapsy/patch-1
Improve .env File Organization, Readability, and Documentation
2023-04-13 21:01:45 +01:00
Richard Beales
d8681435c7 Merge pull request #970 from suclogger/fix-reading-config-file-encoding
Use UTF-8 encoding for reading config file.
2023-04-13 20:59:41 +01:00
Richard Beales
90c05326d8 Merge pull request #1125 from MoElaSec/master
[README] Link to 11Labs website to obtain API_KEY
2023-04-13 20:42:54 +01:00
Richard Beales
905e1b4012 Merge pull request #1120 from sagarishere/patch-5
Fix twitter link: in README.md
2023-04-13 20:40:37 +01:00
Richard Beales
b8301118a6 Merge pull request #1142 from melambert/api-error
Simple retry on Open AI chat if a Rate Limit or 502 Bad Gateway…
2023-04-13 20:39:56 +01:00
Joseph C. Miller, II
f3e9739501 Revert inadvertent change 2023-04-13 13:35:46 -06:00
Richard Beales
b8cf6fae10 Merge pull request #1138 from Celthi/txt-dev
skip getting relevant memory if no message history
2023-04-13 20:34:38 +01:00
Richard Beales
58a7fad381 Merge pull request #837 from AlrikOlson/prompt-generator
Refactor seed prompt loading: replace text file with Python class for easier maintenance
2023-04-13 20:25:42 +01:00
Joseph C. Miller, II
95b93045be Exit message should be yellow 2023-04-13 13:16:14 -06:00
Joseph C. Miller, II
56b3fc916e Merge with master 2023-04-13 12:51:36 -06:00
Alrik Olson
8186ccb56a formatting 2023-04-13 11:36:48 -07:00
Alrik Olson
94845aa0e1 Merge branch 'master' into prompt-generator 2023-04-13 11:34:43 -07:00
Richard Beales
af40de5342 Merge pull request #884 from Androbin/patch-4
Fix JSON formatting in prompt.txt
2023-04-13 19:23:41 +01:00
cs0lar
a94b93b38e fixed conflicts 2023-04-13 19:20:52 +01:00
Richard Beales
2a7bc5cb5c Merge pull request #1156 from Torantulino/master
Pull Latest (Batch 3 PRs) from master into stable
2023-04-13 19:00:48 +01:00
Alrik Olson
412e48c599 Merge branch 'master' into prompt-generator 2023-04-13 11:00:04 -07:00
Pi
36dc481ed4 Merge pull request #1155 from Torantulino/richbeales-patch-2
Flake8 linter fix E302
2023-04-13 18:58:37 +01:00
Richard Beales
f4ff62f0cb Flake8 linter fix E302 2023-04-13 18:57:14 +01:00
Alrik Olson
9b256a3dd5 Merge branch 'master' into prompt-generator 2023-04-13 10:54:39 -07:00
cs0lar
0c3562fcdd fixed config bug 2023-04-13 18:50:56 +01:00
Richard Beales
ff52b204c3 Merge pull request #1147 from Torantulino/richbeales-patch-2
Hotfix - re-add missing cfg variable to memory/base
2023-04-13 18:35:31 +01:00
Richard Beales
0488385f2c Merge pull request #1151 from nponeccop/pr-whitespace-E302
Fix flake8 E302
2023-04-13 18:34:00 +01:00
cs0lar
2f8cf68762 fixed conflicts 2023-04-13 18:33:13 +01:00
Andy Melnikov
6bb4ca0bff Fix flake8 E302 2023-04-13 19:32:35 +02:00
Richard Beales
23e5f3c3c6 Merge pull request #1148 from ezzcodeezzlife/master
remove output to set OpenAI API key in config.py
2023-04-13 18:24:58 +01:00
cs0lar
067e697b8b fixed weaviate test and fixed conflicts 2023-04-13 18:24:43 +01:00
Richard Beales
529646926e Merge pull request #1053 from ishworpanta10/patch-1
Update serial in Installation step
2023-04-13 18:07:28 +01:00
Richard Beales
6f4b2ffb4a Merge pull request #1038 from primaryobjects/azure-ad
Config option for azure_ad to support Managed Identities
2023-04-13 18:05:33 +01:00
fabi.s
1da9dbe671 remove output to set OpenAI API key in config.py 2023-04-13 19:05:23 +02:00
Richard Beales
53c00b4199 Hotfix - re-add missing cfg variable to memory/base 2023-04-13 18:01:12 +01:00
Richard Beales
0ba2956ee4 Merge pull request #1014 from drikusroor/fix-flake8-issues-pt-2
Fix flake8 issues pt. 2 (Add E231 & E302 flake8 rules)
2023-04-13 17:56:06 +01:00
Richard Beales
5e4b0def8c Merge pull request #1033 from merwanehamadi/feature/put-loop-in-if-main
put loop in if main
2023-04-13 17:54:59 +01:00
Kory Becker
36f0570e93 Merge branch 'master' into azure-ad 2023-04-13 12:53:57 -04:00
Kory Becker
da247ca600 merge fix 2023-04-13 12:47:16 -04:00
Richard Beales
cc0bd3b962 Merge pull request #1121 from ymarouani/bugFix/temperatureType
temperature should not be an int; it can be any value between 0 and 1
2023-04-13 17:38:26 +01:00
Richard Beales
f98fba6657 Update README - Discord link and unit test link
Use new url for discord, correct link to ci.yaml workflow.
2023-04-13 17:33:22 +01:00
Mark
d2f75e8659 Added simple retry on Open AI chat if a Rate Limit or 502 Bad Gateway error received 2023-04-13 17:23:16 +01:00
lekapsy
c0beeeb6b2 Merge branch 'master' into patch-1 2023-04-13 18:13:32 +02:00
Merwane Hamadi
b112f5ebfa put loop in if main 2023-04-13 09:09:38 -07:00
celthi
c5188d5611 skip getting relevant memory if no message history 2023-04-13 23:41:09 +08:00
Pi
cc8fc99c50 Merge pull request #938 from 6rzes/fix_encoding_charmap_utf
fix Error: 'charmap' codec can't encode character '\u0142' in position 99: character maps to <undefined>
2023-04-13 16:17:08 +01:00
Kory Becker
6dd6b1f878 Merge branch 'master' into azure-ad 2023-04-13 11:11:56 -04:00
Alrik Olson
2a62394112 add: execute shell command to prompt.py 2023-04-13 07:56:56 -07:00
Alrik Olson
1bb056e9c9 Merge branch 'master' into prompt-generator 2023-04-13 07:56:42 -07:00
Joseph C. Miller, II
2f7a402040 Use yellow instead of red for termination message 2023-04-13 08:49:22 -06:00
Pi
a3efbd0bee Merge pull request #1011 from cryptidv/redis-logging
Improved logging on connection fail to a Memory Backend
2023-04-13 15:47:41 +01:00
Pi
c3adb1950b Merge pull request #1031 from DerekCL/feature/readme-lint-fix
Linter Autofix for Readme.md
2023-04-13 15:38:24 +01:00
Pi
1a8a757d72 Merge pull request #1022 from leondz/patch-1
Auto-GPT requires numpy -- added to requirements.txt
2023-04-13 15:34:50 +01:00
ShifraSec
f7910e85ce Link to 11Labs website to obtain API_KEY 2023-04-13 18:33:20 +04:00
Pi
dcf379c3e2 Merge pull request #1032 from merwanehamadi/feature/ability-have-no-memory
Feature/ability have no memory
2023-04-13 15:27:19 +01:00
Yossi Marouani
dd15900804 temperature should not be an int; it can be any value between 0 and 1 2023-04-13 17:19:02 +03:00
sagarishere
ccfb568694 Fix twitter link: in README.md
Fixed twitter link to go to:
https://twitter.com/SigGravitas

Previously it was going to the icon image.
2023-04-13 17:08:23 +03:00
Peter Edwards
825b0eb5b2 Merge branch 'Torantulino:master' into readme-cleanup 2023-04-13 16:02:43 +02:00
Peter Edwards
41f17f8904 Small README.md clarity update and usage fixup 2023-04-13 16:02:15 +02:00
Pi
294fa5f85e Merge pull request #1050 from Torantulino/richbeales-patch-1
Correct link to unit tests in README
2023-04-13 14:54:10 +01:00
Pi
cc9bb19847 Merge branch 'master' into richbeales-patch-1 2023-04-13 14:52:54 +01:00
Pi
a956136421 Merge pull request #1065 from jiangying000/patch-1
Update README.md on log location
2023-04-13 14:45:53 +01:00
Pi
befa70b0e1 Merge pull request #1068 from Thakay/fixed-redundancy
Removed redundant cfg object creation in base memory file
2023-04-13 14:44:50 +01:00
Pi
5d93fbdd5c Merge pull request #1071 from sagarishere/patch-3
Typo: in PULL_REQUEST_TEMPLATE.md
2023-04-13 14:43:13 +01:00
Pi
ce8ce5a896 Merge pull request #1072 from WalterSumbon/master
replace deprecated function with current equivalent
2023-04-13 14:41:28 +01:00
Pi
d5ba889168 Merge pull request #1087 from digger-yu/patch-1
Update test_json_parser.py
2023-04-13 14:38:58 +01:00
Pi
4c11d72ade Merge pull request #1095 from sagarishere/patch-4
Remove deprecated (404) links, and add alt-text to one image: Update …
2023-04-13 14:32:55 +01:00
Ishwor Panta
3358bd453e Merge branch 'master' into patch-1 2023-04-13 19:10:06 +05:45
Eesa Hamza
ff094c7ecc Resolve Linter Issues 2023-04-13 15:09:24 +03:00
Maiko Bossuyt
334400edd1 Merge branch 'Torantulino:master' into add_ingest_documents_script 2023-04-13 13:50:41 +02:00
Maiko Bossuyt
1b49c1d37a Merge branch 'master' into add_website_memory 2023-04-13 13:47:46 +02:00
sagarishere
f3fb810979 Remove deprecated (404) links, and add alt-text to one image: Update README.md
1. Removed the link to Unit-tests, as that link is deprecated and clicking it reports that the workflow no longer exists.

2. Added alt text to the Discord link, following the convention from Twitter link alt text
2023-04-13 14:37:56 +03:00
Eesa Hamza
a10ffc1dbe Fixed error logging when choosing non-supported memory backend with '--use-memory' 2023-04-13 14:26:16 +03:00
Eesa Hamza
0f6fba7d65 Implemented the '--ai-settings' flag 2023-04-13 14:02:42 +03:00
digger-yu
0c7b7e5de8 Update test_json_parser.py
Optimize part of the code to maintain uniformity
2023-04-13 18:43:32 +08:00
Eesa Hamza
428caa9bef Added flags, and implemented skip-reprompt 2023-04-13 12:57:57 +03:00
Drikus Roor
abe01ab81e fix: Fix flake8 linting errors 2023-04-13 11:05:36 +02:00
Drikus Roor
62edc148a8 chore: Remove functions that had been removed on the master branch recently 2023-04-13 10:56:02 +02:00
Drikus Roor
d1ea6cf002 lint: Fix all E302 linting errors 2023-04-13 10:50:51 +02:00
Drikus Roor
04dc0f7149 lint: Add flake8 rule E302 to the flake8 workflow job 2023-04-13 10:50:27 +02:00
Drikus Roor
4afd0a3714 lint: Fix E231 flake8 linting errors 2023-04-13 10:50:27 +02:00
Drikus Roor
8ff36bb8ba lint: Add rule E231 to the flake8 linting job 2023-04-13 10:50:26 +02:00
Drikus Roor
947d27a9ed docs: Update README.md with the flake8 command used in the CI 2023-04-13 10:50:26 +02:00
WalterSumbon
4c7eef550a replace deprecated function with current equivalent 2023-04-13 16:45:08 +08:00
sagarishere
aa4118d4b9 Typo: in PULL_REQUEST_TEMPLATE.md
Typo
2023-04-13 11:27:15 +03:00
Kasra Amini
0061976a91 Removed redundant cfg object creation in base memory file 2023-04-13 03:32:39 -04:00
jiangying
d938013084 Update README.md on log location 2023-04-13 15:06:29 +08:00
Richard Beales
c8b8673286 Merge pull request #802 from sweetlilmre/more_azure
More fixes for Azure hosting
2023-04-13 07:58:41 +01:00
Richard Beales
17b84099dc Merge pull request #1062 from sagarishere/patch-1
Update .gitignore
2023-04-13 07:56:54 +01:00
Richard Beales
9c3f2a9c81 Merge pull request #1044 from fqd511/patch-1
add link for pinecone in README
2023-04-13 07:55:49 +01:00
Peter Edwards
1a64a60296 Merge branch 'Torantulino:master' into more_azure 2023-04-13 08:53:20 +02:00
Richard Beales
2ca7652e50 Merge pull request #810 from nponeccop/pr-whitespace-E303
Fix flake8 E303,W293,W291,W292,E305
2023-04-13 07:51:22 +01:00
Andy Melnikov
e635cf4a4a Fix flake8 E303,W293,W291,W292,E305
Some of the previously fixed issues crept back in. I would like to keep the
requirements on PR authors low for now and gradually improve
pep8 compliance without introducing much breakage.
2023-04-13 08:49:20 +02:00
Peter Edwards
285627e216 remove trailing whitespace 2023-04-13 08:41:18 +02:00
Peter Edwards
d46e1fb755 Merge remote-tracking branch 'upstream/master' into more_azure 2023-04-13 08:38:30 +02:00
Richard Beales
73378b176c Merge pull request #742 from Artemonim/delete-settings-as-file
Delete ai_settings.yaml
2023-04-13 07:35:47 +01:00
Peter Edwards
84c72d4f8c Merge remote-tracking branch 'upstream/master' into more_azure 2023-04-13 08:35:13 +02:00
Peter Edwards
fc13122d56 Merge branch 'more_azure' of https://github.com/sweetlilmre/Auto-GPT into more_azure 2023-04-13 08:29:14 +02:00
Peter Edwards
bc4bca65d5 Fix for python < 3.10 2023-04-13 08:29:07 +02:00
Richard Beales
886117d82e Merge pull request #1028 from merwanehamadi/feature/add-ability-change-temperature
Feature/add ability change temperature
2023-04-13 07:28:15 +01:00
Richard Beales
dabaa9d6a8 Merge pull request #965 from nolan23/master
pull image if it's not found locally
2023-04-13 07:26:30 +01:00
sagarishere
9f2d609be3 Update .gitignore
Added .DS_Store to .gitignore for Mac environments.

It can be assumed that some developers may not have global .gitignore settings for their environments. This will ease the friction of a rejected push, rolling back the commit, re-applying changes from stashes, and pushing again.

Can save precious time for some devs using Mac systems.
2023-04-13 09:24:15 +03:00
Richard Beales
49b65590ea Merge pull request #781 from chozzz/fix/refactor-code
Remove duplicated unnecessary code
2023-04-13 07:17:37 +01:00
Richard Beales
8e30edba2c Merge pull request #463 from muellerberndt/shellcommands
Add capability to execute shell commands
2023-04-13 07:16:52 +01:00
Peter Edwards
5cec481711 Merge branch 'Torantulino:master' into more_azure 2023-04-13 08:13:03 +02:00
Peter Edwards
bcdb4e476f Merge remote-tracking branch 'upstream/master' into more_azure 2023-04-13 08:12:29 +02:00
Richard Beales
4e4af3ed26 Merge pull request #780 from coditamar/browse_scrape_links_test_and_validate
browse: (1) apply url validation also to scrape_links(), (2) add unit-tests for scrape_links()
2023-04-13 07:10:06 +01:00
Bernhard Mueller
f79c7c4d1e Fix linter errors 2023-04-13 12:43:25 +07:00
cs0lar
855de1890f Merge branch 'master' into feature/weaviate-memory 2023-04-13 06:23:36 +01:00
Ishwor Panta
1546c24441 Update serial in Installation step
updated the numbering in the Installation steps to start from 1 rather than 0.
2023-04-13 10:52:35 +05:45
Richard Beales
65abfc9d3d Correct link to unit tests in README 2023-04-13 06:01:28 +01:00
Bernhard Mueller
946700fcf7 Change workdir logic 2023-04-13 11:27:43 +07:00
Bernhard Mueller
3ff2323450 Rename command & functions to execute_shell 2023-04-13 11:04:26 +07:00
511
91d1f3eca8 add link for pinecone in README 2023-04-13 11:43:09 +08:00
Alrik Olson
a3eedfcd2f Merge branch 'master' into prompt-generator 2023-04-12 19:48:33 -07:00
Kory Becker
db9f8a2749 Added config option for OPENAI_API_TYPE=azure_ad 2023-04-12 22:14:51 -04:00
Merwane Hamadi
84c128fd0f Create NoMemory provider as a memory provider that does not store any data 2023-04-12 17:48:11 -07:00
Merwane Hamadi
62bd93a4d2 Import NoMemory and add it as a memory_backend option in get_memory function 2023-04-12 17:48:08 -07:00
derekcl
746cd5bc88 linter autofix 2023-04-12 19:38:38 -05:00
vadi
c594200195 Remove duplicated unnecessary config instance 2023-04-13 10:01:52 +10:00
Merwane Hamadi
046c49c90b add temperature to .env.template 2023-04-12 16:28:21 -07:00
Merwane Hamadi
f4831348e8 Use configured temperature value in create_chat_completion function 2023-04-12 16:27:11 -07:00
Merwane Hamadi
4faba1fdd8 Add temperature configuration option to Config class 2023-04-12 16:27:01 -07:00
Leon Derczynski
129d734a4c Auto-GPT requires numpy 2023-04-12 16:15:21 -07:00
Joseph C. Miller, II
a5d4ffd945 Merge branch 'master' into continuous-mode-limit 2023-04-12 15:50:05 -06:00
Joseph C. Miller, II
d706a3436d Make configuration similar to other arguments. 2023-04-12 15:39:25 -06:00
Joseph C. Miller, II
12e1fcca92 Correct the checking for continuous limit without continuous mode 2023-04-12 15:36:35 -06:00
Maiko Bossuyt
1c64a9d245 Update .env.template 2023-04-12 23:33:14 +02:00
Maiko Bossuyt
36d455c20e split_file() rework
Rework the split_file function to simplify it to a single yield while providing an overlap at the start and end of each chunk (see the sketch below).
2023-04-12 23:31:26 +02:00
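A minimal sketch of such a chunker (character-based lengths assumed): one yield per pass, with the step size reduced by the overlap so consecutive chunks share their edges.

def split_file(content: str, max_length: int = 4000, overlap: int = 0):
    """Yield successive chunks of `content`, overlapping by `overlap` characters."""
    start = 0
    while start < len(content):
        yield content[start : start + max_length]
        # advance by less than a full chunk so each chunk repeats the
        # tail of the previous one (assumes overlap < max_length)
        start += max_length - overlap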
Joseph C. Miller, II
5badde2c27 Add message to explain exit. 2023-04-12 15:30:34 -06:00
Maiko Bossuyt
3a2ccbd02f Merge branch 'master' into add_website_memory 2023-04-12 23:18:09 +02:00
Itamar Friedman
3e53e976a5 flake8 style 2023-04-13 00:06:23 +03:00
Itamar Friedman
bf3c76ced7 flake8 style 2023-04-13 00:04:08 +03:00
Itamar Friedman
9f972f4ee9 flake8 style 2023-04-13 00:00:33 +03:00
Itamar Friedman
a40ccc1e5d flake8 style 2023-04-12 23:53:40 +03:00
Maiko Bossuyt
2f1181f9a1 Update .gitignore 2023-04-12 22:52:37 +02:00
Maiko Bossuyt
2c8b42307b Merge branch 'Torantulino:master' into add_ingest_documents_script 2023-04-12 22:52:08 +02:00
Itamar Friedman
54478b35f2 Merge branch 'master' into browse_scrape_links_test_and_validate 2023-04-12 23:51:53 +03:00
Maiko Bossuyt
4e914e5ec1 Revert "Update .gitignore"
This reverts commit 7975c184a5.
2023-04-12 22:51:52 +02:00
Eesa Hamza
76b5b95744 Attempt to fix Linter issues 2023-04-12 23:49:32 +03:00
Maiko Bossuyt
2e0b44ae05 fix chunk creation
the last chunk wasn't correctly created; this commit fixes that issue.
2023-04-12 22:46:49 +02:00
Eesa Hamza
8c51fe8373 Added new logging function as an error or warning message 2023-04-12 23:38:53 +03:00
Eesa Hamza
5d871f04e6 Added pinecone connectivity check and added relevant logging 2023-04-12 23:29:54 +03:00
Eesa Hamza
a850c27dd5 Improved logging on connection fail to redis 2023-04-12 23:13:34 +03:00
Itamar Friedman
57bca3620e minor style 2023-04-12 23:04:43 +03:00
Itamar Friedman
c63645cbba redo suggested changes. move unit test files to the fitting directory 2023-04-12 22:41:23 +03:00
Itamar Friedman
7c0c896600 Merge branch 'master' into browse_scrape_links_test_and_validate 2023-04-12 22:31:51 +03:00
Artemonim
d699e764ef Merge remote-tracking branch 'upstream/master' into delete-settings-as-file 2023-04-12 21:56:33 +03:00
cs0lar
5592dbd277 resolved latest conflicts 2023-04-12 19:54:56 +01:00
Artemonim
fdb4c99447 Merge branch 'master' into delete-settings-as-file 2023-04-12 21:54:34 +03:00
Maiko Bossuyt
65cc4f833f Add Memory Pre-Seeding information to readme.md
Add the documentation for memory pre-seeding
2023-04-12 20:47:46 +02:00
Maiko Bossuyt
4c30b47bbc Merge branch 'add_ingest_documents_script' of https://github.com/maiko/Auto-GPT into add_ingest_documents_script 2023-04-12 20:40:41 +02:00
Maiko Bossuyt
2afc89a1a4 Merge branch 'Torantulino:master' into add_ingest_documents_script 2023-04-12 20:40:11 +02:00
lekapsy
f9e104208d Merge branch 'master' into patch-1 2023-04-12 20:40:05 +02:00
Maiko Bossuyt
280647ff38 Update data_ingestion.py
move the search_file function inside the data_ingestion script
add memory initialisation argument
add overlap argument
add chunk max_length argument
2023-04-12 20:19:36 +02:00
Maiko Bossuyt
4465486ea3 Update file_operations.py
move the search_file function inside the data_ingestion script
2023-04-12 20:19:27 +02:00
cs0lar
530894608b added support of API key based auth 2023-04-12 19:09:52 +01:00
cs0lar
b7d0cc3b24 removed the extra class property 2023-04-12 19:00:30 +01:00
cs0lar
35ecd95c49 removed unnecessary flush() 2023-04-12 18:56:42 +01:00
cs0lar
415c1cb4b5 fixed quotes 2023-04-12 18:55:34 +01:00
cs0lar
b9a4f97790 resolved latest conflicts 2023-04-12 18:52:06 +01:00
Maiko Bossuyt
8faa6ef949 Create data_ingestion.py
This script is used when we want to seed Auto-GPT memory with one or multiple documents.

The documents are read, split into chunks, and stored in memory.
2023-04-12 19:47:51 +02:00
Maiko Bossuyt
c91117616f Update file_operations.py
revert the change in imports as we don't need it
2023-04-12 19:46:58 +02:00
Maiko Bossuyt
7975c184a5 Update .gitignore
add new log file to gitignore
2023-04-12 19:46:39 +02:00
Maiko Bossuyt
137751f95c Merge branch 'Torantulino:master' into add_ingest_documents_script 2023-04-12 19:36:40 +02:00
lekapsy
7729f198d4 Merge branch 'master' into patch-1 2023-04-12 19:17:34 +02:00
Maiko Bossuyt
d7609b3095 Merge branch 'add_ingest_documents_script' of https://github.com/maiko/Auto-GPT into add_ingest_documents_script 2023-04-12 19:13:26 +02:00
Maiko Bossuyt
0dddc94bda Add file ingestion methods in file_operations.py
Add the following functions to ingest data into memory before an Auto-GPT run (see the sketch after this list):

- split_file: given some content, split it into chunks of max_length with (or without) a specified overlap

- ingest_file: read a file, use split_file to split it into chunks, and load each chunk into memory

- ingest_directory: ingest all files in a directory into memory
2023-04-12 19:13:04 +02:00
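A rough sketch of the two ingestion helpers described above, assuming a memory object exposing an add(text) method and the split_file generator from the earlier entry; names and signatures are illustrative only.

import os

def ingest_file(filename, memory, max_length=4000, overlap=200):
    with open(filename, "r", encoding="utf-8") as f:
        content = f.read()
    for i, chunk in enumerate(split_file(content, max_length, overlap)):
        # store each chunk with enough context to trace it back to its source
        memory.add(f"Filename: {filename}\nChunk {i}:\n{chunk}")

def ingest_directory(directory, memory):
    for root, _, files in os.walk(directory):
        for name in files:
            ingest_file(os.path.join(root, name), memory)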
cs0lar
67b84b5811 added client install 2023-04-12 17:54:59 +01:00
lekapsy
d237cf3d87 Improve .env File Organization, Readability, and Documentation
This pull request aims to enhance the organization, readability, and understanding of the .env.template file for users when they modify the settings. The changes include organizing the file in a tree-like structure with appropriate comments, providing clear guidance for users about the purpose of each variable, their possible values, and default settings when applicable.

As a user with no prior knowledge of best practices of contributing to a project / .env.template file documentation, I took the liberty to make changes to the file based on what I would have liked to have seen when I first encountered it. My goal was to include every configurable option for ease of use and better understanding of how the code works.

The key improvements made in this pull request are:

1. Grouping related variables under appropriate headers for better organization and ease of navigation.
2. Adding informative comments for each variable to help users understand their purpose and possible values.
3. Including default values in the comments to inform users of the consequences of not providing a specific value for a variable, allowing them to make informed decisions when configuring the application.
4. Formatting the file consistently for better readability.

These changes will enhance user experience by simplifying the configuration process and reducing potential confusion. Users can quickly and easily configure the application without having to search through the code to determine default values or understand the relationship between various settings. Additionally, well-organized code and documentation can lead to fewer issues and misunderstandings, saving time for both users and maintainers of the project.

Please review these changes and let me know if you have any questions or suggestions for further improvement so I can make any necessary adjustments.
2023-04-12 18:54:10 +02:00
Bernhard Mueller
940772b502 Merge branch 'shellcommands' of github.com:muellerberndt/Auto-GPT into shellcommands 2023-04-12 23:47:16 +07:00
Bernhard Mueller
affe77e18c Call subprocess.run with shell=True 2023-04-12 23:46:55 +07:00
Bernhard Mueller
9e8d35277b Update scripts/commands.py
Co-authored-by: Peter Stalman <sarkedev@gmail.com>
2023-04-12 23:32:17 +07:00
Bernhard Mueller
cc9723c26e Make chdir code more robust 2023-04-12 23:30:35 +07:00
cs0lar
e3aea6d6c4 added weaviate embedded section in README 2023-04-12 17:21:37 +01:00
Bernhard Mueller
15dffed6e5 Merge branch 'master' of github.com:Torantulino/Auto-GPT into shellcommands 2023-04-12 23:15:31 +07:00
Maiko Bossuyt
8baa0769b1 Update config.py 2023-04-12 18:03:59 +02:00
Maiko Bossuyt
a615e57061 Revert "Update main.py"
This reverts commit c785352ed2.
2023-04-12 18:00:17 +02:00
profound
c5f0cb3d3f fix read config file encoding that broke Chinese 2023-04-12 23:38:30 +08:00
roby.parapat
730fbf591f pull image if it's not found locally 2023-04-12 22:15:22 +07:00
Maiko Bossuyt
5bb551db95 add the url variable in the get_text_summary function to pass it to the memory
By sending the url along when calling browse.summarize_text, we can then store it alongside the chunk in memory.
2023-04-12 16:42:14 +02:00
Maiko Bossuyt
b20c0117c5 Add memory management to browse.py
- Change the way the User-Agent header is handled when using requests to browse websites

- Add each chunk to memory before and after summarization (see the sketch below). We do not save the "summary of summaries", as this wasn't performing well and caused noise when the "question" couldn't be answered.

- Use the newly added config parameters for max_length and max_token
2023-04-12 16:38:49 +02:00
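A hedged sketch of the flow this describes: each raw chunk goes into memory before summarization, the per-chunk summary goes in after, and the final summary of summaries is deliberately not persisted. memory, split_text, and summarize_chunk are assumed interfaces, not the project's exact API.

def summarize_text(url, text, question, memory):
    summaries = []
    for chunk in split_text(text):
        # persist the raw chunk first so it can be recalled later
        memory.add(f"Source: {url}\nRaw content: {chunk}")
        summary = summarize_chunk(chunk, question)
        # persist the per-chunk summary as well
        memory.add(f"Source: {url}\nContent summary: {summary}")
        summaries.append(summary)
    # the combined result is returned but intentionally kept out of memory
    return summarize_chunk("\n".join(summaries), question)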
Maiko Bossuyt
c986e87135 Edit config class to manage browse_website command chunk size and summary size
I added two new config parameters:

- browse_chunk_max_length: defines the max_length of a chunk being sent to memory and to FAST_LLM_MODEL for summarizing

- browse_summary_max_token: defines the max_token passed to the model used for summary creation. Changing this can help with complex subjects, allowing the agent to be more verbose in its attempts to summarize each chunk and the chunks' summaries.

I've also edited the way the user_agent is handled.
2023-04-12 16:36:27 +02:00
Alrik Olson
2ef9928a2e Merge remote-tracking branch 'origin/master' into prompt-generator 2023-04-12 07:33:36 -07:00
Maiko Bossuyt
c785352ed2 Update main.py
clean trailing whitespace
2023-04-12 16:23:09 +02:00
Peter Edwards
6fa9501251 Merge branch 'Torantulino:master' into more_azure 2023-04-12 13:28:23 +02:00
Gull Man
c932087997 add encoding to open file 2023-04-12 12:13:18 +02:00
Peter Edwards
650e2dcd6d cleaned up .env to move Azure config to separate azure.yaml file
updated README.md to explain new config
added Azure yaml loader to config class
centralized model retrieval into config class
this commit effectively combines and replaces #700 and #580
2023-04-12 11:27:37 +02:00
Itamar Friedman
2ec42bf3e8 removing compliant whitespace 2023-04-12 12:21:53 +03:00
Itamar Friedman
11abb906dd Merge branch 'master' into browse_scrape_links_test_and_validate 2023-04-12 12:18:55 +03:00
Itamar Friedman
1a7159095a Merge remote-tracking branch 'upstream/master' into browse_scrape_links_test_and_validate 2023-04-12 12:18:16 +03:00
Itamar Friedman
354fc76268 Merge remote-tracking branch 'upstream/master' 2023-04-12 11:56:11 +03:00
Itamar Friedman
e8b7a117da Merge remote-tracking branch 'origin/master' into browse_scrape_links_test_and_validate 2023-04-12 11:51:43 +03:00
Itamar Friedman
98778cea73 Merge remote-tracking branch 'upstream/master' 2023-04-12 11:48:55 +03:00
cs0lar
f2a6ac5dc2 fixed order and removed dupes 2023-04-12 09:20:29 +01:00
cs0lar
75c4132f02 Merge pull request #1 from cs0lar/feature/weaviate-embedded
Feature/weaviate embedded
2023-04-12 08:24:40 +01:00
cs0lar
453b428d33 added support for weaviate embedded 2023-04-12 08:21:41 +01:00
Peter Edwards
17b037faf7 Merge branch 'Torantulino:master' into more_azure 2023-04-12 09:18:17 +02:00
cs0lar
96c5e929be added support for weaviate embedded 2023-04-12 05:40:24 +01:00
Robin Richtsfeld
afc7fa6e26 Fix JSON formatting in prompt.txt 2023-04-12 03:09:08 +02:00
Denis Mozhayskiy
990297b463 Easy run bat files with requirements check 2023-04-12 02:18:07 +03:00
meta-fx
570f76bd51 Removed trailing spaces and fixed CRLF being removed 2023-04-11 14:44:22 -05:00
Alrik Olson
7a0c9e8a9d fix attempts to import a non-existent module 2023-04-11 10:30:53 -07:00
Mike Kelly
de2281d824 add docker compose scheduling 2023-04-11 18:05:29 +01:00
Alrik Olson
8bbfdeb04a Add unit tests for prompt generator class 2023-04-11 09:43:37 -07:00
Alrik Olson
72d4783a1d formatting 2023-04-11 09:21:20 -07:00
Alrik Olson
b73bca6b2d Merge remote-tracking branch 'origin/master' into prompt-generator 2023-04-11 09:17:36 -07:00
Alrik Olson
fd1cfd2eff Add docs and format code 2023-04-11 09:15:45 -07:00
Alrik Olson
b19eb74874 Refactor the seed prompt to be generated programmatically
This removes the tedium of having to re-number every numbered item in the prompt.txt if you want to add/remove commands.
2023-04-11 09:09:59 -07:00
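A simplified sketch of the idea (the class and method names here are illustrative, not the project's exact API): commands live in a list and numbering is applied at render time, so adding or removing one never requires manual re-numbering.

class PromptGenerator:
    def __init__(self):
        self.commands = []

    def add_command(self, label, name, args=None):
        self.commands.append({"label": label, "name": name, "args": args or {}})

    def generate_commands_block(self):
        # numbering is derived from list position, not stored in the data
        return "\n".join(
            f'{i}. {c["label"]}: "{c["name"]}", args: {c["args"]}'
            for i, c in enumerate(self.commands, start=1)
        )

generator = PromptGenerator()
generator.add_command("Execute Shell Command", "execute_shell", {"command_line": "<command_line>"})
print(generator.generate_commands_block())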
meta-fx
efd6a7ecf5 Merge branch 'master' into added-new-voice 2023-04-11 08:39:15 -05:00
meta-fx
3cdde2d49c Resolved conflicts in config.py and speak.py 2023-04-11 08:15:58 -05:00
cs0lar
3c7767fab0 fixed formatting 2023-04-11 13:51:31 +01:00
cs0lar
786ee6003c fixed formatting 2023-04-11 13:50:02 +01:00
Peter Edwards
23f46adc61 Merge branch 'Torantulino:master' into more_azure 2023-04-11 13:54:07 +02:00
Itamar Friedman
1210ba41d0 Merge remote-tracking branch 'upstream/master' 2023-04-11 14:47:28 +03:00
Peter Edwards
9d33a75083 Changes for Azure embedding handling 2023-04-11 13:45:37 +02:00
coditamar
f6c8a0f289 Merge branch 'master' into browse_scrape_links_test_and_validate 2023-04-11 14:43:57 +03:00
cs0lar
5fe784aabe added weaviate to the supported vector memory providers 2023-04-11 11:14:13 +01:00
Itamar Friedman
64c21ee8f7 browse: make scrape_links() & scrape_text() "status_code >= 400" error message the same 2023-04-11 11:40:52 +03:00
Itamar Friedman
2d5d0131bb browse: (1) apply validation also to scrape_links(), (2) add tests for scrape_links() 2023-04-11 11:17:07 +03:00
BillSchumacher
0b955c0546 Update README.md
Update warning
2023-04-10 22:19:21 -05:00
Bernhard Mueller
aba7956f10 Merge branch 'master' into shellcommands 2023-04-11 09:25:53 +07:00
Bernhard Mueller
0d664ce8d6 Revert ai_settings.yaml 2023-04-11 09:23:52 +07:00
BillSchumacher
65b626c5e1 Plugins initial 2023-04-10 20:57:47 -05:00
meta-fx
3ee62211db Fixed formatting issues 2023-04-10 20:56:27 -05:00
meta-fx
0cf790b633 Added new env variable and speech function for alternative TTS voice 2023-04-10 20:00:43 -05:00
Artemonim
949e554860 Delete ai_settings.yaml 2023-04-11 02:14:51 +03:00
Bernhard Mueller
9598679180 Merge branch 'master' into shellcommands 2023-04-10 23:16:39 +07:00
Bernhard Mueller
09d2f47e08 Introduce EXECUTE_SHELL_COMMANDS config var, default to False 2023-04-10 11:01:48 +07:00
Bernhard Mueller
dd469bf2ae Change working directory during shell command execution 2023-04-10 10:26:54 +07:00
Bernhard Mueller
955b83c136 Make line in prompt more concise 2023-04-10 10:16:24 +07:00
Bernhard Mueller
64da02bf4a Fix merge conflicts 2023-04-10 10:14:35 +07:00
Bernhard Mueller
7867c8dc34 Update prompt (advise GPT to use only shell commands that terminate) 2023-04-09 18:43:20 +07:00
Peter Krenesky
bcc1b5f8bf Merge branch 'master' into command_registry 2023-04-08 16:46:58 -07:00
cs0lar
76a1462e37 moved pinecone api config settings into provider class 2023-04-08 16:11:31 +01:00
cs0lar
97ac802f0c resolved conflicts between master and feature/weaviate-memory 2023-04-08 15:38:21 +01:00
cs0lar
0ce0c553a6 the three memory related commands memory_add, memory_del, memory_ovr are absent in the latest version of execute_command therefore the corresponding handlers commit_memory, delete_memory and overwrite_memory have been removed also because they assume a memory with a different interface than the proposed one. 2023-04-08 08:13:17 +01:00
cs0lar
1e63bc52be Merge branch 'master' into feature/weaviate-memory 2023-04-08 07:45:02 +01:00
Bernhard Mueller
66eb1dcfc5 Add exec_shell command 2023-04-08 12:39:03 +07:00
Bernhard Mueller
4844998b49 Merge pull request #1 from Torantulino/master
Merge from upstream repo
2023-04-08 10:20:52 +07:00
cs0lar
da4ba3c10f added factory tests 2023-04-07 22:07:08 +01:00
cs0lar
986d32ca42 added support for multiple memory provider and added weaviate integration 2023-04-07 20:41:07 +01:00
Peter
3095591064 switch to explicit module imports 2023-04-06 20:00:28 -07:00
Peter
b4a0ef9bab resolving test failures 2023-04-06 19:25:44 -07:00
Peter
e2a6ed6955 adding tests for CommandRegistry 2023-04-06 18:24:53 -07:00
Peter
a24ab0e879 dynamically load commands from registry 2023-04-06 15:42:35 -07:00
ryanmac
29c0b544a4 Delete requirements-mac-Python-3.11.txt
Removing unnecessary files
2023-04-05 20:03:46 -05:00
EricFedrowisch
fa0ec78441 Merge pull request #1 from Torantulino/master
Merging up to latest on Torantulino/Auto-GPT Master
2023-04-04 10:33:38 -05:00
Eric Fedrowisch
6adef8ed7c First draft at adding persistent memory via sqlite3 2023-04-03 19:38:59 -05:00
“Philip
6003d98f3a More specific wording
consistent escaping
2023-04-03 20:35:12 +01:00
ryanmac
6ea2a97e83 Rename requirements-new.txt to requirements-mac-Python-3.11.txt 2023-04-03 14:15:21 -05:00
ryanmac
ac7fefe96e Use playwright instead of requests for browse 2023-04-03 14:05:32 -05:00
Mike Harris
4cde35267b Improve extract_hyperlinks to honor base url 2023-04-03 12:51:50 -04:00
“Philip
f20d6f3fdb Breaking on None and NaN values being returned
Fix by converting them to a valid null value for JSON
2023-04-03 15:07:47 +01:00
“Philip
1e07373696 Fix JSON string escaping issue
Fixes an issue where double quotes were not being escaped in JSON strings, causing parse errors.
2023-04-03 14:58:27 +01:00
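A hedged illustration of the failure mode rather than the exact patch: building JSON by string interpolation leaves inner double quotes unescaped, while json.dumps escapes them correctly.

import json

thought = 'Use the "browse_website" command next'

broken = '{"thoughts": "%s"}' % thought       # inner quotes break the parser
well_formed = json.dumps({"thoughts": thought})  # quotes escaped as \"

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print("parse error:", e)

print(json.loads(well_formed)["thoughts"])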
240 changed files with 17887 additions and 6582 deletions

2
.coveragerc Normal file

@@ -0,0 +1,2 @@
[run]
relative_files = true

13
.devcontainer/Dockerfile Normal file

@@ -0,0 +1,13 @@
# Use an official Python base image from the Docker Hub
FROM python:3.10
# Install browsers
RUN apt-get update && apt-get install -y \
chromium-driver firefox-esr \
ca-certificates
# Install utilities
RUN apt-get install -y curl jq wget git
# Declare working directory
WORKDIR /workspace/Auto-GPT


@@ -0,0 +1,40 @@
{
"dockerComposeFile": "./docker-compose.yml",
"service": "auto-gpt",
"workspaceFolder": "/workspace/Auto-GPT",
"shutdownAction": "stopCompose",
"features": {
"ghcr.io/devcontainers/features/common-utils:2": {
"installZsh": "true",
"username": "vscode",
"userUid": "6942",
"userGid": "6942",
"upgradePackages": "true"
},
"ghcr.io/devcontainers/features/desktop-lite:1": {},
"ghcr.io/devcontainers/features/python:1": "none",
"ghcr.io/devcontainers/features/node:1": "none",
"ghcr.io/devcontainers/features/git:1": {
"version": "latest",
"ppa": "false"
}
},
// Configure tool-specific properties.
"customizations": {
// Configure properties specific to VS Code.
"vscode": {
// Set *default* container specific settings.json values on container create.
"settings": {
"python.defaultInterpreterPath": "/usr/local/bin/python"
}
}
},
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "pip3 install --user -r requirements.txt",
// Set `remoteUser` to `root` to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "vscode"
}


@@ -0,0 +1,19 @@
# To boot the app run the following:
# docker-compose run auto-gpt
version: '3.9'
services:
auto-gpt:
depends_on:
- redis
build:
dockerfile: .devcontainer/Dockerfile
context: ../
tty: true
environment:
MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
REDIS_HOST: ${REDIS_HOST:-redis}
volumes:
- ../:/workspace/Auto-GPT
redis:
image: 'redis/redis-stack-server:latest'

8
.dockerignore Normal file

@@ -0,0 +1,8 @@
.*
*.template
*.yaml
*.yml
*.md
*.png
!BULLETIN.md


@@ -1,20 +1,214 @@
PINECONE_API_KEY=your-pinecone-api-key
PINECONE_ENV=your-pinecone-region
################################################################################
### AUTO-GPT - GENERAL SETTINGS
################################################################################
## EXECUTE_LOCAL_COMMANDS - Allow local command execution (Default: False)
## RESTRICT_TO_WORKSPACE - Restrict file operations to workspace ./auto_gpt_workspace (Default: True)
# EXECUTE_LOCAL_COMMANDS=False
# RESTRICT_TO_WORKSPACE=True
## USER_AGENT - Define the user-agent used by the requests library to browse website (string)
# USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36"
## AI_SETTINGS_FILE - Specifies which AI Settings file to use (defaults to ai_settings.yaml)
# AI_SETTINGS_FILE=ai_settings.yaml
## AUTHORISE COMMAND KEY - Key to authorise commands
# AUTHORISE_COMMAND_KEY=y
## EXIT_KEY - Key to exit AUTO-GPT
# EXIT_KEY=n
################################################################################
### LLM PROVIDER
################################################################################
### OPENAI
## OPENAI_API_KEY - OpenAI API Key (Example: my-openai-api-key)
## TEMPERATURE - Sets temperature in OpenAI (Default: 0)
## USE_AZURE - Use Azure OpenAI or not (Default: False)
OPENAI_API_KEY=your-openai-api-key
ELEVENLABS_API_KEY=your-elevenlabs-api-key
ELEVENLABS_VOICE_1_ID=your-voice-id
ELEVENLABS_VOICE_2_ID=your-voice-id
SMART_LLM_MODEL=gpt-4
FAST_LLM_MODEL=gpt-3.5-turbo
GOOGLE_API_KEY=
CUSTOM_SEARCH_ENGINE_ID=
USE_AZURE=False
OPENAI_AZURE_API_BASE=your-base-url-for-azure
OPENAI_AZURE_API_VERSION=api-version-for-azure
OPENAI_AZURE_DEPLOYMENT_ID=deployment-id-for-azure
OPENAI_AZURE_CHAT_DEPLOYMENT_ID=deployment-id-for-azure-chat
OPENAI_AZURE_EMBEDDINGS_DEPLOYMENT_ID=deployment-id-for-azure-embeddigs
IMAGE_PROVIDER=dalle
HUGGINGFACE_API_TOKEN=
USE_MAC_OS_TTS=False
MEMORY_BACKEND=local
# TEMPERATURE=0
# USE_AZURE=False
### AZURE
# moved to `azure.yaml.template`
################################################################################
### LLM MODELS
################################################################################
## SMART_LLM_MODEL - Smart language model (Default: gpt-4)
## FAST_LLM_MODEL - Fast language model (Default: gpt-3.5-turbo)
# SMART_LLM_MODEL=gpt-4
# FAST_LLM_MODEL=gpt-3.5-turbo
### LLM MODEL SETTINGS
## FAST_TOKEN_LIMIT - Fast token limit for OpenAI (Default: 4000)
## SMART_TOKEN_LIMIT - Smart token limit for OpenAI (Default: 8000)
## When using --gpt3only this needs to be set to 4000.
# FAST_TOKEN_LIMIT=4000
# SMART_TOKEN_LIMIT=8000
################################################################################
### MEMORY
################################################################################
### MEMORY_BACKEND - Memory backend type
## local - Default
## pinecone - Pinecone (if configured)
## redis - Redis (if configured)
## milvus - Milvus (if configured - also works with Zilliz)
## MEMORY_INDEX - Name of index created in Memory backend (Default: auto-gpt)
# MEMORY_BACKEND=local
# MEMORY_INDEX=auto-gpt
### PINECONE
## PINECONE_API_KEY - Pinecone API Key (Example: my-pinecone-api-key)
## PINECONE_ENV - Pinecone environment (region) (Example: us-west-2)
# PINECONE_API_KEY=your-pinecone-api-key
# PINECONE_ENV=your-pinecone-region
### REDIS
## REDIS_HOST - Redis host (Default: localhost, use "redis" for docker-compose)
## REDIS_PORT - Redis port (Default: 6379)
## REDIS_PASSWORD - Redis password (Default: "")
## WIPE_REDIS_ON_START - Wipes data / index on start (Default: True)
# REDIS_HOST=localhost
# REDIS_PORT=6379
# REDIS_PASSWORD=
# WIPE_REDIS_ON_START=True
### WEAVIATE
## MEMORY_BACKEND - Use 'weaviate' to use Weaviate vector storage
## WEAVIATE_HOST - Weaviate host IP
## WEAVIATE_PORT - Weaviate host port
## WEAVIATE_PROTOCOL - Weaviate host protocol (e.g. 'http')
## USE_WEAVIATE_EMBEDDED - Whether to use Embedded Weaviate
## WEAVIATE_EMBEDDED_PATH - File system path where to persist data when running Embedded Weaviate
## WEAVIATE_USERNAME - Weaviate username
## WEAVIATE_PASSWORD - Weaviate password
## WEAVIATE_API_KEY - Weaviate API key if using API-key-based authentication
# WEAVIATE_HOST="127.0.0.1"
# WEAVIATE_PORT=8080
# WEAVIATE_PROTOCOL="http"
# USE_WEAVIATE_EMBEDDED=False
# WEAVIATE_EMBEDDED_PATH="/home/me/.local/share/weaviate"
# WEAVIATE_USERNAME=
# WEAVIATE_PASSWORD=
# WEAVIATE_API_KEY=
### MILVUS
## MILVUS_ADDR - Milvus remote address (e.g. localhost:19530, https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443)
## MILVUS_USERNAME - username for your Milvus database
## MILVUS_PASSWORD - password for your Milvus database
## MILVUS_SECURE - True to enable TLS. (Default: False)
## Setting MILVUS_ADDR to a `https://` URL will override this setting.
## MILVUS_COLLECTION - Milvus collection, change it if you want to start a new memory and retain the old memory.
# MILVUS_ADDR=localhost:19530
# MILVUS_USERNAME=
# MILVUS_PASSWORD=
# MILVUS_SECURE=
# MILVUS_COLLECTION=autogpt
################################################################################
### IMAGE GENERATION PROVIDER
################################################################################
### OPEN AI
## IMAGE_PROVIDER - Image provider (Example: dalle)
## IMAGE_SIZE - Image size (Example: 256)
## DALLE: 256, 512, 1024
# IMAGE_PROVIDER=dalle
# IMAGE_SIZE=256
### HUGGINGFACE
## HUGGINGFACE_IMAGE_MODEL - Text-to-image model from Huggingface (Default: CompVis/stable-diffusion-v1-4)
## HUGGINGFACE_API_TOKEN - HuggingFace API token (Example: my-huggingface-api-token)
# HUGGINGFACE_IMAGE_MODEL=CompVis/stable-diffusion-v1-4
# HUGGINGFACE_API_TOKEN=your-huggingface-api-token
### STABLE DIFFUSION WEBUI
## SD_WEBUI_AUTH - Stable diffusion webui username:password pair (Example: username:password)
## SD_WEBUI_URL - Stable diffusion webui API URL (Example: http://127.0.0.1:7860)
# SD_WEBUI_AUTH=
# SD_WEBUI_URL=http://127.0.0.1:7860
################################################################################
### AUDIO TO TEXT PROVIDER
################################################################################
### HUGGINGFACE
# HUGGINGFACE_AUDIO_TO_TEXT_MODEL=facebook/wav2vec2-base-960h
################################################################################
### GIT Provider for repository actions
################################################################################
### GITHUB
## GITHUB_API_KEY - Github API key / PAT (Example: github_pat_123)
## GITHUB_USERNAME - Github username
# GITHUB_API_KEY=github_pat_123
# GITHUB_USERNAME=your-github-username
################################################################################
### WEB BROWSING
################################################################################
### BROWSER
## HEADLESS_BROWSER - Whether to run the browser in headless mode (default: True)
## USE_WEB_BROWSER - Sets the web-browser driver to use with selenium (default: chrome).
## Note: set this to either 'chrome', 'firefox', or 'safari' depending on your current browser
# HEADLESS_BROWSER=True
# USE_WEB_BROWSER=chrome
## BROWSE_CHUNK_MAX_LENGTH - When browsing website, define the length of chunks to summarize (in number of tokens, excluding the response. 75 % of FAST_TOKEN_LIMIT is usually wise )
# BROWSE_CHUNK_MAX_LENGTH=3000
## BROWSE_SPACY_LANGUAGE_MODEL is used to split sentences. Install additional languages via pip, and set the model name here. Example Chinese: python -m spacy download zh_core_web_sm
# BROWSE_SPACY_LANGUAGE_MODEL=en_core_web_sm
### GOOGLE
## GOOGLE_API_KEY - Google API key (Example: my-google-api-key)
## CUSTOM_SEARCH_ENGINE_ID - Custom search engine ID (Example: my-custom-search-engine-id)
# GOOGLE_API_KEY=your-google-api-key
# CUSTOM_SEARCH_ENGINE_ID=your-custom-search-engine-id
################################################################################
### TTS PROVIDER
################################################################################
### MAC OS
## USE_MAC_OS_TTS - Use Mac OS TTS or not (Default: False)
# USE_MAC_OS_TTS=False
### STREAMELEMENTS
## USE_BRIAN_TTS - Use Brian TTS or not (Default: False)
# USE_BRIAN_TTS=False
### ELEVENLABS
## ELEVENLABS_API_KEY - Eleven Labs API key (Example: my-elevenlabs-api-key)
## ELEVENLABS_VOICE_1_ID - Eleven Labs voice 1 ID (Example: my-voice-id-1)
## ELEVENLABS_VOICE_2_ID - Eleven Labs voice 2 ID (Example: my-voice-id-2)
# ELEVENLABS_API_KEY=your-elevenlabs-api-key
# ELEVENLABS_VOICE_1_ID=your-voice-id-1
# ELEVENLABS_VOICE_2_ID=your-voice-id-2
################################################################################
### TWITTER API
################################################################################
# TW_CONSUMER_KEY=
# TW_CONSUMER_SECRET=
# TW_ACCESS_TOKEN=
# TW_ACCESS_TOKEN_SECRET=
################################################################################
### ALLOWLISTED PLUGINS
################################################################################
## ALLOWLISTED_PLUGINS - Sets the plugins that are allowed to be loaded (Example: plugin1,plugin2,plugin3)
ALLOWLISTED_PLUGINS=
################################################################################
### CHAT PLUGIN SETTINGS
################################################################################
## CHAT_MESSAGES_ENABLED - Enable chat messages (Default: False)
# CHAT_MESSAGES_ENABLED=False
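
For reference, here is a minimal sketch of how settings like the ones above are typically consumed at runtime, assuming `python-dotenv` and the variable names from this template; it is illustrative only, not the project's actual `Config` loader:

```python
# Minimal sketch: load .env values with python-dotenv and fall back to the
# template defaults shown above. Illustrative only, not autogpt's Config class.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the working directory into os.environ

headless_browser = os.getenv("HEADLESS_BROWSER", "True") == "True"
web_browser = os.getenv("USE_WEB_BROWSER", "chrome")
browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", "3000"))
```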

4
.envrc Normal file
View File

@@ -0,0 +1,4 @@
# On entering the directory, direnv asks for permission once, then automatically loads the project dependencies from then on.
# This eliminates the need for Nix users to run "nix develop github:superherointj/nix-auto-gpt" to develop/use Auto-GPT.
[[ -z $IN_NIX_SHELL ]] && use flake github:superherointj/nix-auto-gpt

12
.flake8 Normal file
View File

@@ -0,0 +1,12 @@
[flake8]
max-line-length = 88
select = "E303, W293, W291, W292, E305, E231, E302"
exclude =
.tox,
__pycache__,
*.pyc,
.env,
venv*/*,
.venv/*,
reports/*,
dist/*,

5
.gitattributes vendored Normal file
View File

@@ -0,0 +1,5 @@
# Exclude VCR cassettes from stats
tests/**/cassettes/**.y*ml linguist-generated
# Mark documentation as such
docs/**.md linguist-documentation

View File

@@ -2,6 +2,29 @@ name: Bug report 🐛
description: Create a bug report for Auto-GPT.
labels: ['status: needs triage']
body:
- type: markdown
attributes:
value: |
### ⚠️ Before you continue
* Check out our [backlog], [roadmap] and join our [discord] to discuss what's going on
* If you need help, you can ask in the [discussions] section or in [#tech-support]
* **Thoroughly search the [existing issues] before creating a new one**
[backlog]: https://github.com/orgs/Significant-Gravitas/projects/1
[roadmap]: https://github.com/orgs/Significant-Gravitas/projects/2
[discord]: https://discord.gg/autogpt
[discussions]: https://github.com/Significant-Gravitas/Auto-GPT/discussions
[#tech-support]: https://discord.com/channels/1092243196446249134/1092275629602394184
[existing issues]: https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue
- type: checkboxes
attributes:
label: ⚠️ Search for existing issues first ⚠️
description: >
Please [search the history](https://github.com/Torantulino/Auto-GPT/issues)
to see if an issue already exists for the same problem.
options:
- label: I have searched the existing issues, and there is no existing issue for my problem
required: true
- type: markdown
attributes:
value: |
@@ -19,14 +42,46 @@ body:
- Provide commit-hash (`git rev-parse HEAD` gets it)
- If it's a pip/packages issue, provide pip version, python version
- If it's a crash, provide traceback.
- type: checkboxes
- type: dropdown
attributes:
label: Duplicates
description: Please [search the history](https://github.com/Torantulino/Auto-GPT/issues) to see if an issue already exists for the same problem.
label: Which Operating System are you using?
description: >
Please select the operating system you were using to run Auto-GPT when this problem occurred.
options:
- label: I have searched the existing issues
required: true
- Windows
- Linux
- MacOS
- Docker
- Devcontainer / Codespace
- Windows Subsystem for Linux (WSL)
- Other (Please specify in your problem)
validations:
required: true
- type: dropdown
attributes:
label: Which version of Auto-GPT are you using?
description: |
Please select which version of Auto-GPT you were using when this issue occurred.
If you downloaded the code from the [releases page](https://github.com/Significant-Gravitas/Auto-GPT/releases/) make sure you were using the latest code.
**If you weren't, please try with the [latest code](https://github.com/Significant-Gravitas/Auto-GPT/releases/)**.
If installed with git you can run `git branch` to see which version of Auto-GPT you are running.
options:
- Latest Release
- Stable (branch)
- Master (branch)
validations:
required: true
- type: dropdown
attributes:
label: GPT-3 or GPT-4?
description: >
If you are using Auto-GPT with `--gpt3only`, your problems may be caused by
the [limitations](https://github.com/Significant-Gravitas/Auto-GPT/issues?q=is%3Aissue+label%3A%22AI+model+limitation%22) of GPT-3.5.
options:
- GPT-3.5
- GPT-4
validations:
required: true
- type: textarea
attributes:
label: Steps to reproduce 🕹
@@ -43,9 +98,34 @@ body:
- type: textarea
attributes:
label: Your prompt 📝
description: |
If applicable please provide the prompt you are using. You can find your last-used prompt in last_run_ai_settings.yaml.
description: >
If applicable please provide the prompt you are using. Your prompt is stored in your `ai_settings.yaml` file.
value: |
```yaml
# Paste your prompt here
```
- type: textarea
attributes:
label: Your Logs 📒
description: |
Please include the log showing your error and the command that caused it, if applicable.
You can copy it from your terminal or from `logs/activity.log`.
This will help us understand your issue better!
<details>
<summary><i>Example</i></summary>
```log
INFO NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'some_command'}
INFO -=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=
Traceback (most recent call last):
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 619, in _interpret_response
self._interpret_response_line(
File "/home/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10982 tokens (10982 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
```
</details>
value: |
```log
<insert your logs here>
```

View File

@@ -1,3 +1,10 @@
<!-- ⚠️ At the moment any non-essential commands are not being merged.
If you want to add non-essential commands to Auto-GPT, please create a plugin instead.
We are expecting to ship plugin support within the week (PR #757).
Resources:
* https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template
-->
<!-- 📢 Announcement
We've recently noticed an increase in pull requests focusing on combining multiple changes. While the intentions behind these PRs are appreciated, it's essential to maintain a clean and manageable git history. To ensure the quality of our repository, we kindly ask you to adhere to the following guidelines when submitting PRs:
@@ -26,8 +33,8 @@ By following these guidelines, your PRs are more likely to be merged quickly aft
- [ ] I have thoroughly tested my changes with multiple different prompts.
- [ ] I have considered potential risks and mitigations for my changes.
- [ ] I have documented my changes clearly and comprehensively.
- [ ] I have not snuck in any "extra" small tweaks changes <!-- Submit these as separate Pull Reqests, they are the easiest to merge! -->
- [ ] I have not snuck in any "extra" small tweaks changes <!-- Submit these as separate Pull Requests, they are the easiest to merge! -->
<!-- If you haven't added tests, please explain why. If you have, check the appropriate box. If you've ensured your PR is atomic and well-documented, check the corresponding boxes. -->
<!-- By submitting this, I agree that my pull request should be closed if I do not fill this out or follow the guide lines. -->
<!-- By submitting this, I agree that my pull request should be closed if I do not fill this out or follow the guidelines. -->

View File

@@ -1,23 +0,0 @@
name: auto-format
on: pull_request
jobs:
format:
runs-on: ubuntu-latest
steps:
- name: Checkout PR branch
uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: autopep8
uses: peter-evans/autopep8@v1
with:
args: --exit-code --recursive --in-place --aggressive --aggressive .
- name: Check for modified files
id: git-check
run: echo "modified=$(if git diff-index --quiet HEAD --; then echo "false"; else echo "true"; fi)" >> $GITHUB_ENV
- name: Push changes
if: steps.git-check.outputs.modified == 'true'
run: |
git config --global user.name 'Torantulino'
git config --global user.email 'toran.richards@gmail.com'
git remote set

31
.github/workflows/benchmarks.yml vendored Normal file
View File

@@ -0,0 +1,31 @@
name: Run Benchmarks
on:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
env:
python-version: '3.10'
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Python ${{ env.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ env.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: benchmark
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
python benchmark/benchmark_entrepreneur_gpt_with_undecisive_user.py

View File

@@ -2,43 +2,76 @@ name: Python CI
on:
push:
branches:
- master
branches: [ master ]
pull_request:
branches:
- master
branches: [ master ]
concurrency:
group: ${{ format('ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs:
build:
lint:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.8]
env:
min-python-version: "3.10"
steps:
- name: Check out repository
uses: actions/checkout@v2
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Set up Python ${{ env.min-python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.min-python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Lint with flake8
continue-on-error: false
run: flake8 scripts/ tests/ --select E303,W293,W291,W292,E305
- name: Lint with flake8
run: flake8
- name: Run unittest tests with coverage
run: |
coverage run --source=scripts -m unittest discover tests
- name: Check black formatting
run: black . --check
if: success() || failure()
- name: Generate coverage report
run: |
coverage report
coverage xml
- name: Check isort formatting
run: isort . --check
if: success() || failure()
test:
permissions:
# Gives the action the necessary permissions for publishing new
# comments in pull requests.
pull-requests: write
# Gives the action the necessary permissions for pushing data to the
# python-coverage-comment-action branch, and for editing existing
# comments (to avoid publishing multiple comments in the same PR)
contents: write
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.10", "3.11"]
steps:
- name: Check out repository
uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Run unittest tests with coverage
run: |
pytest --cov=autogpt --cov-report term-missing --cov-branch --cov-report xml --cov-report term
- name: Upload coverage reports to Codecov
uses: codecov/codecov-action@v3

View File

@@ -0,0 +1,58 @@
name: Purge Docker CI cache
on:
schedule:
- cron: '20 4 * * 1,4'
env:
BASE_BRANCH: master
IMAGE_NAME: auto-gpt
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
build-type: [release, dev]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=${{ matrix.build-type }}
load: true # save to docker images
# use GHA cache as read-only
cache-to: type=gha,scope=docker-${{ matrix.build-type }},mode=max
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.schedule }}
build_type: ${{ matrix.build-type }}
prod_branch: stable
dev_branch: master
repository: ${{ github.repository }}
base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'stable' && 'master' || 'stable' }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.sha }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.sha) }}
push_forced_label:
new_commits_json: ${{ null }}
compare_url_template: ${{ format('/{0}/compare/{{base}}...{{head}}', github.repository) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-ci-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true

115
.github/workflows/docker-ci.yml vendored Normal file
View File

@@ -0,0 +1,115 @@
name: Docker CI
on:
push:
branches: [ master ]
pull_request:
branches: [ master ]
concurrency:
group: ${{ format('docker-ci-{0}', github.head_ref && format('pr-{0}', github.event.pull_request.number) || github.sha) }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
env:
IMAGE_NAME: auto-gpt
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
build-type: [release, dev]
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- if: runner.debug
run: |
ls -al
du -hs *
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=${{ matrix.build-type }}
tags: ${{ env.IMAGE_NAME }}
load: true # save to docker images
# cache layers in GitHub Actions cache to speed up builds
cache-from: type=gha,scope=docker-${{ matrix.build-type }}
cache-to: type=gha,scope=docker-${{ matrix.build-type }},mode=max
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.ref }}
event_ref_type: ${{ github.event.ref_type }}
build_type: ${{ matrix.build-type }}
prod_branch: stable
dev_branch: master
repository: ${{ github.repository }}
base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'stable' && 'master' || 'stable' }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.event.after }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.event.release && github.event.release.tag_name || github.sha) }}
push_forced_label: ${{ github.event.forced && '☢️ forced' || '' }}
new_commits_json: ${{ toJSON(github.event.commits) }}
compare_url_template: ${{ format('/{0}/compare/{{base}}...{{head}}', github.repository) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-ci-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true
# Docker setup needs fixing before this is going to work: #1843
test:
runs-on: ubuntu-latest
needs: build
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=dev # include pytest
tags: ${{ env.IMAGE_NAME }}
load: true # save to docker images
# cache layers in GitHub Actions cache to speed up builds
cache-from: type=gha,scope=docker-dev
cache-to: type=gha,scope=docker-dev,mode=max
- id: test
name: Run tests
env:
CI: true
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
run: |
set +e
test_output=$(
docker run --env CI --env OPENAI_API_KEY --entrypoint python ${{ env.IMAGE_NAME }} -m \
pytest --cov=autogpt --cov-report term-missing --cov-branch --cov-report xml --cov-report term 2>&1
)
test_failure=$?
echo "$test_output"
cat << $EOF >> $GITHUB_STEP_SUMMARY
# Tests $([ $test_failure = 0 ] && echo '✅' || echo '❌')
\`\`\`
$test_output
\`\`\`
$EOF

81
.github/workflows/docker-release.yml vendored Normal file
View File

@@ -0,0 +1,81 @@
name: Docker Release
on:
release:
types: [ published, edited ]
workflow_dispatch:
inputs:
no_cache:
type: boolean
description: 'Build from scratch, without using cached layers'
env:
IMAGE_NAME: auto-gpt
DEPLOY_IMAGE_NAME: ${{ secrets.DOCKER_USER }}/auto-gpt
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Log in to Docker hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
# slashes are not allowed in image tags, but can appear in git branch or tag names
- id: sanitize_tag
name: Sanitize image tag
run: echo tag=${raw_tag//\//-} >> $GITHUB_OUTPUT
env:
raw_tag: ${{ github.ref_name }}
- id: build
name: Build image
uses: docker/build-push-action@v3
with:
build-args: BUILD_TYPE=release
load: true # save to docker images
# push: true # TODO: uncomment when this issue is fixed: https://github.com/moby/buildkit/issues/1555
tags: >
${{ env.IMAGE_NAME }},
${{ env.DEPLOY_IMAGE_NAME }}:latest,
${{ env.DEPLOY_IMAGE_NAME }}:${{ steps.sanitize_tag.outputs.tag }}
# cache layers in GitHub Actions cache to speed up builds
cache-from: ${{ !inputs.no_cache && 'type=gha' || '' }},scope=docker-release
cache-to: type=gha,scope=docker-release,mode=max
- name: Push image to Docker Hub
run: docker push --all-tags ${{ env.DEPLOY_IMAGE_NAME }}
- name: Generate build report
env:
event_name: ${{ github.event_name }}
event_ref: ${{ github.event.ref }}
event_ref_type: ${{ github.event.ref_type }}
inputs_no_cache: ${{ inputs.no_cache }}
prod_branch: stable
dev_branch: master
repository: ${{ github.repository }}
base_branch: ${{ github.ref_name != 'master' && github.ref_name != 'stable' && 'master' || 'stable' }}
ref_type: ${{ github.ref_type }}
current_ref: ${{ github.ref_name }}
commit_hash: ${{ github.sha }}
source_url: ${{ format('{0}/tree/{1}', github.event.repository.url, github.event.release && github.event.release.tag_name || github.sha) }}
github_context_json: ${{ toJSON(github) }}
job_env_json: ${{ toJSON(env) }}
vars_json: ${{ toJSON(vars) }}
run: .github/workflows/scripts/docker-release-summary.sh >> $GITHUB_STEP_SUMMARY
continue-on-error: true

View File

@@ -0,0 +1,37 @@
name: Docs
on:
push:
branches: [ stable ]
paths:
- 'docs/**'
- 'mkdocs.yml'
- '.github/workflows/documentation.yml'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v3
- name: Set up Python 3
uses: actions/setup-python@v4
with:
python-version: 3.x
- name: Set up workflow cache
uses: actions/cache@v3
with:
key: ${{ github.ref }}
path: .cache
- run: pip install mkdocs-material
- run: mkdocs gh-deploy --force

55
.github/workflows/pr-label.yml vendored Normal file
View File

@@ -0,0 +1,55 @@
name: "Pull Request auto-label"
on:
# So that PRs touching the same files as the push are updated
push:
branches: [ master ]
# So that the `dirtyLabel` is removed if conflicts are resolved
# We recommend `pull_request_target` so that github secrets are available.
# In `pull_request` we wouldn't be able to change labels of fork PRs
pull_request_target:
types: [ opened, synchronize ]
concurrency:
group: ${{ format('pr-label-{0}', github.event.pull_request.number || github.sha) }}
cancel-in-progress: true
jobs:
conflicts:
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- name: Update PRs with conflict labels
uses: eps1lon/actions-label-merge-conflict@releases/2.x
with:
dirtyLabel: "conflicts"
#removeOnDirtyLabel: "PR: ready to ship"
repoToken: "${{ secrets.GITHUB_TOKEN }}"
commentOnDirty: "This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request."
commentOnClean: "Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly."
size:
if: ${{ github.event_name == 'pull_request_target' }}
permissions:
issues: write
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: codelytv/pr-size-labeler@v1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
xs_label: 'size/xs'
xs_max_size: 2
s_label: 'size/s'
s_max_size: 10
m_label: 'size/m'
m_max_size: 50
l_label: 'size/l'
l_max_size: 200
xl_label: 'size/xl'
message_if_xl: >
This PR exceeds the recommended size of 200 lines.
Please make sure you are NOT addressing multiple issues with one PR.
Note this PR might be rejected due to its size.

View File

@@ -0,0 +1,98 @@
#!/bin/bash
meta=$(docker image inspect "$IMAGE_NAME" | jq '.[0]')
head_compare_url=$(sed "s/{base}/$base_branch/; s/{head}/$current_ref/" <<< $compare_url_template)
ref_compare_url=$(sed "s/{base}/$base_branch/; s/{head}/$commit_hash/" <<< $compare_url_template)
EOF=$(dd if=/dev/urandom bs=15 count=1 status=none | base64)
cat << $EOF
# Docker Build summary 🔨
**Source:** branch \`$current_ref\` -> [$repository@\`${commit_hash:0:7}\`]($source_url)
**Build type:** \`$build_type\`
**Image size:** $((`jq -r .Size <<< $meta` / 10**6))MB
## Image details
**Tags:**
$(jq -r '.RepoTags | map("* `\(.)`") | join("\n")' <<< $meta)
<details>
<summary><h3>Layers</h3></summary>
| Age | Size | Created by instruction |
| --------- | ------ | ---------------------- |
$(docker history --no-trunc --format "{{.CreatedSince}}\t{{.Size}}\t\`{{.CreatedBy}}\`\t{{.Comment}}" $IMAGE_NAME \
| grep 'buildkit.dockerfile' `# filter for layers created in this build process`\
| cut -f-3 `# yeet Comment column`\
| sed 's/ ago//' `# fix Layer age`\
| sed 's/ # buildkit//' `# remove buildkit comment from instructions`\
| sed 's/\$/\\$/g' `# escape variable and shell expansions`\
| sed 's/|/\\|/g' `# escape pipes so they don't interfere with column separators`\
| column -t -s$'\t' -o' | ' `# align columns and add separator`\
| sed 's/^/| /; s/$/ |/' `# add table row start and end pipes`)
</details>
<details>
<summary><h3>ENV</h3></summary>
| Variable | Value |
| -------- | -------- |
$(jq -r \
'.Config.Env
| map(
split("=")
| "\(.[0]) | `\(.[1] | gsub("\\s+"; " "))`"
)
| map("| \(.) |")
| .[]' <<< $meta
)
</details>
<details>
<summary>Raw metadata</summary>
\`\`\`JSON
$meta
\`\`\`
</details>
## Build details
**Build trigger:** $push_forced_label $event_name \`$event_ref\`
<details>
<summary><code>github</code> context</summary>
\`\`\`JSON
$github_context_json
\`\`\`
</details>
### Source
**HEAD:** [$repository@\`${commit_hash:0:7}\`]($source_url) on branch [$current_ref]($ref_compare_url)
**Diff with previous HEAD:** $head_compare_url
#### New commits
$(jq -r 'map([
"**Commit [`\(.id[0:7])`](\(.url)) by \(if .author.username then "@"+.author.username else .author.name end):**",
.message,
(if .committer.name != .author.name then "\n> <sub>**Committer:** \(.committer.name) <\(.committer.email)></sub>" else "" end),
"<sub>**Timestamp:** \(.timestamp)</sub>"
] | map("> \(.)\n") | join("")) | join("\n")' <<< $new_commits_json)
### Job environment
#### \`vars\` context:
\`\`\`JSON
$vars_json
\`\`\`
#### \`env\` context:
\`\`\`JSON
$job_env_json
\`\`\`
$EOF

View File

@@ -0,0 +1,85 @@
#!/bin/bash
meta=$(docker image inspect "$IMAGE_NAME" | jq '.[0]')
EOF=$(dd if=/dev/urandom bs=15 count=1 status=none | base64)
cat << $EOF
# Docker Release Build summary 🚀🔨
**Source:** $ref_type \`$current_ref\` -> [$repository@\`${commit_hash:0:7}\`]($source_url)
**Image size:** $((`jq -r .Size <<< $meta` / 10**6))MB
## Image details
**Tags:**
$(jq -r '.RepoTags | map("* `\(.)`") | join("\n")' <<< $meta)
<details>
<summary><h3>Layers</h3></summary>
| Age | Size | Created by instruction |
| --------- | ------ | ---------------------- |
$(docker history --no-trunc --format "{{.CreatedSince}}\t{{.Size}}\t\`{{.CreatedBy}}\`\t{{.Comment}}" $IMAGE_NAME \
| grep 'buildkit.dockerfile' `# filter for layers created in this build process`\
| cut -f-3 `# yeet Comment column`\
| sed 's/ ago//' `# fix Layer age`\
| sed 's/ # buildkit//' `# remove buildkit comment from instructions`\
| sed 's/\$/\\$/g' `# escape variable and shell expansions`\
| sed 's/|/\\|/g' `# escape pipes so they don't interfere with column separators`\
| column -t -s$'\t' -o' | ' `# align columns and add separator`\
| sed 's/^/| /; s/$/ |/' `# add table row start and end pipes`)
</details>
<details>
<summary><h3>ENV</h3></summary>
| Variable | Value |
| -------- | -------- |
$(jq -r \
'.Config.Env
| map(
split("=")
| "\(.[0]) | `\(.[1] | gsub("\\s+"; " "))`"
)
| map("| \(.) |")
| .[]' <<< $meta
)
</details>
<details>
<summary>Raw metadata</summary>
\`\`\`JSON
$meta
\`\`\`
</details>
## Build details
**Build trigger:** $event_name \`$current_ref\`
| Parameter | Value |
| -------------- | ------------ |
| \`no_cache\` | \`$inputs_no_cache\` |
<details>
<summary><code>github</code> context</summary>
\`\`\`JSON
$github_context_json
\`\`\`
</details>
### Job environment
#### \`vars\` context:
\`\`\`JSON
$vars_json
\`\`\`
#### \`env\` context:
\`\`\`JSON
$job_env_json
\`\`\`
$EOF

28
.github/workflows/sponsors_readme.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: Generate Sponsors README
on:
workflow_dispatch:
schedule:
- cron: '0 */12 * * *'
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout 🛎️
uses: actions/checkout@v3
- name: Generate Sponsors 💖
uses: JamesIves/github-sponsors-readme-action@v1
with:
token: ${{ secrets.README_UPDATER_PAT }}
file: 'README.md'
minimum: 2500
maximum: 99999
- name: Deploy to GitHub Pages 🚀
uses: JamesIves/github-pages-deploy-action@v4
with:
branch: master
folder: '.'
token: ${{ secrets.README_UPDATER_PAT }}

162
.gitignore vendored
View File

@@ -1,21 +1,165 @@
scripts/keys.py
scripts/*json
scripts/node_modules/
scripts/__pycache__/keys.cpython-310.pyc
## Original ignores
autogpt/keys.py
autogpt/*json
autogpt/node_modules/
autogpt/__pycache__/keys.cpython-310.pyc
autogpt/auto_gpt_workspace
package-lock.json
*.pyc
auto_gpt_workspace/*
*.mpeg
.env
*venv/*
outputs/*
azure.yaml
ai_settings.yaml
last_run_ai_settings.yaml
.vscode
.idea/*
auto-gpt.json
log.txt
log-ingestion.txt
logs
*.log
*.mp3
mem.sqlite3
# Coverage reports
.coverage
coverage.xml
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
plugins/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
site/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.direnv/
.env
.venv
env/
venv*/
ENV/
env.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
llama-*
vicuna-*
# mac
.DS_Store
openai/
# news
CURRENT_BULLETIN.md

10
.isort.cfg Normal file
View File

@@ -0,0 +1,10 @@
[settings]
profile = black
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
use_parentheses = true
ensure_newline_before_comments = true
line_length = 88
sections = FUTURE,STDLIB,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
skip = .tox,__pycache__,*.pyc,venv*/*,reports,venv,env,node_modules,.env,.venv,dist

32
.pre-commit-config.yaml Normal file
View File

@@ -0,0 +1,32 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-added-large-files
args: ['--maxkb=500']
- id: check-byte-order-marker
- id: check-case-conflict
- id: check-merge-conflict
- id: check-symlinks
- id: debug-statements
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
language_version: python3.10
- repo: https://github.com/psf/black
rev: 23.3.0
hooks:
- id: black
language_version: python3.10
- repo: local
hooks:
- id: pytest-check
name: pytest-check
entry: pytest --cov=autogpt --without-integration --without-slow-integration
language: system
pass_filenames: false
always_run: true

71
.sourcery.yaml Normal file
View File

@@ -0,0 +1,71 @@
# 🪄 This is your project's Sourcery configuration file.
# You can use it to get Sourcery working in the way you want, such as
# ignoring specific refactorings, skipping directories in your project,
# or writing custom rules.
# 📚 For a complete reference to this file, see the documentation at
# https://docs.sourcery.ai/Configuration/Project-Settings/
# This file was auto-generated by Sourcery on 2023-02-25 at 21:07.
version: '1' # The schema version of this config file
ignore: # A list of paths or files which Sourcery will ignore.
- .git
- venv
- .venv
- build
- dist
- env
- .env
- .tox
rule_settings:
enable:
- default
- gpsg
disable: [] # A list of rule IDs Sourcery will never suggest.
rule_types:
- refactoring
- suggestion
- comment
python_version: '3.10' # A string specifying the lowest Python version your project supports. Sourcery will not suggest refactorings requiring a higher Python version.
# rules: # A list of custom rules Sourcery will include in its analysis.
# - id: no-print-statements
# description: Do not use print statements in the test directory.
# pattern: print(...)
# language: python
# replacement:
# condition:
# explanation:
# paths:
# include:
# - test
# exclude:
# - conftest.py
# tests: []
# tags: []
# rule_tags: {} # Additional rule tags.
# metrics:
# quality_threshold: 25.0
# github:
# labels: []
# ignore_labels:
# - sourcery-ignore
# request_review: author
# sourcery_branch: sourcery/{base_branch}
# clone_detection:
# min_lines: 3
# min_duplicates: 2
# identical_clones_only: false
# proxy:
# url:
# ssl_certs_file:
# no_ssl_verify: false

9
BULLETIN.md Normal file
View File

@@ -0,0 +1,9 @@
Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here.
If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag.
# INCLUDED COMMAND 'send_tweet' IS DEPRECATED, AND WILL BE REMOVED IN THE NEXT STABLE RELEASE
Base Twitter functionality (and more) is now covered by plugins: https://github.com/Significant-Gravitas/Auto-GPT-Plugins
## Changes to Docker configuration
The workdir has been changed from /home/appuser to /app. Be sure to update any volume mounts accordingly.

39
CODE_OF_CONDUCT.md Normal file
View File

@@ -0,0 +1,39 @@
# Code of Conduct for Auto-GPT
## 1. Purpose
The purpose of this Code of Conduct is to provide guidelines for contributors to the auto-gpt project on GitHub. We aim to create a positive and inclusive environment where all participants can contribute and collaborate effectively. By participating in this project, you agree to abide by this Code of Conduct.
## 2. Scope
This Code of Conduct applies to all contributors, maintainers, and users of the auto-gpt project. It extends to all project spaces, including but not limited to issues, pull requests, code reviews, comments, and other forms of communication within the project.
## 3. Our Standards
We encourage the following behavior:
* Being respectful and considerate to others
* Actively seeking diverse perspectives
* Providing constructive feedback and assistance
* Demonstrating empathy and understanding
We discourage the following behavior:
* Harassment or discrimination of any kind
* Disrespectful, offensive, or inappropriate language or content
* Personal attacks or insults
* Unwarranted criticism or negativity
## 4. Reporting and Enforcement
If you witness or experience any violations of this Code of Conduct, please report them to the project maintainers by email or other appropriate means. The maintainers will investigate and take appropriate action, which may include warnings, temporary or permanent bans, or other measures as necessary.
Maintainers are responsible for ensuring compliance with this Code of Conduct and may take action to address any violations.
## 5. Acknowledgements
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html).
## 6. Contact
If you have any questions or concerns, please contact the project maintainers.

View File

@@ -1,56 +1,148 @@
# Contributing to Auto-GPT
To contribute to this GitHub project, you can follow these steps:
First of all, thank you for considering contributing to our project! We appreciate your time and effort, and we value any contribution, whether it's reporting a bug, suggesting a new feature, or submitting a pull request.
1. Fork the repository you want to contribute to by clicking the "Fork" button on the project page.
This document provides guidelines and best practices to help you contribute effectively.
2. Clone the repository to your local machine using the following command:
## Code of Conduct
```
git clone https://github.com/<YOUR-GITHUB-USERNAME>/Auto-GPT
```
3. Create a new branch for your changes using the following command:
By participating in this project, you agree to abide by our [Code of Conduct]. Please read it to understand the expectations we have for everyone who contributes to this project.
```
git checkout -b "branch-name"
```
4. Make your changes to the code or documentation.
- Example: Improve User Interface or Add Documentation.
[Code of Conduct]: https://significant-gravitas.github.io/Auto-GPT/code-of-conduct.md
## 📢 A Quick Word
Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT.
5. Add the changes to the staging area using the following command:
```
git add .
However, you absolutely can still add these commands to Auto-GPT in the form of plugins.
Please check out this [template](https://github.com/Significant-Gravitas/Auto-GPT-Plugin-Template).
## Getting Started
1. Fork the repository and clone your fork.
2. Create a new branch for your changes (use a descriptive name, such as `fix-bug-123` or `add-new-feature`).
3. Make your changes in the new branch.
4. Test your changes thoroughly.
5. Commit and push your changes to your fork.
6. Create a pull request following the guidelines in the [Submitting Pull Requests](#submitting-pull-requests) section.
## How to Contribute
### Reporting Bugs
If you find a bug in the project, please create an issue on GitHub with the following information:
- A clear, descriptive title for the issue.
- A description of the problem, including steps to reproduce the issue.
- Any relevant logs, screenshots, or other supporting information.
### Suggesting Enhancements
If you have an idea for a new feature or improvement, please create an issue on GitHub with the following information:
- A clear, descriptive title for the issue.
- A detailed description of the proposed enhancement, including any benefits and potential drawbacks.
- Any relevant examples, mockups, or supporting information.
### Submitting Pull Requests
When submitting a pull request, please ensure that your changes meet the following criteria:
- Your pull request should be atomic and focus on a single change.
- Your pull request should include tests for your change. We automatically enforce this with [CodeCov](https://docs.codecov.com/docs/commit-status)
- You should have thoroughly tested your changes with multiple different prompts.
- You should have considered potential risks and mitigations for your changes.
- You should have documented your changes clearly and comprehensively.
- You should not include any unrelated or "extra" small tweaks or changes.
## Style Guidelines
### Code Formatting
We use the `black` and `isort` code formatters to maintain a consistent coding style across the project. Please ensure that your code is formatted properly before submitting a pull request.
To format your code, run the following commands in the project's root directory:
```bash
python -m black .
python -m isort .
```
6. Commit the changes with a meaningful commit message using the following command:
Or if you have these tools installed globally:
```bash
black .
isort .
```
git commit -m "your commit message"
```
7. Push the changes to your forked repository using the following command:
```
git push origin branch-name
```
8. Go to the GitHub website and navigate to your forked repository.
9. Click the "New pull request" button.
### Pre-Commit Hooks
10. Select the branch you just pushed to and the branch you want to merge into on the original repository.
11. Add a description of your changes and click the "Create pull request" button.
12. Wait for the project maintainer to review your changes and provide feedback.
13. Make any necessary changes based on feedback and repeat steps 5-12 until your changes are accepted and merged into the main project.
14. Once your changes are merged, you can update your forked repository and local copy of the repository with the following commands:
We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps:
Install the pre-commit package using pip:
```bash
pip install pre-commit
```
git fetch upstream
git checkout master
git merge upstream/master
Run the following command in the project's root directory to install the pre-commit hooks:
```bash
pre-commit install
```
Finally, delete the branch you created with the following command:
```
git branch -d branch-name
```
That's it you made it 🐣⭐⭐
Now, the pre-commit hooks will run automatically before each commit, checking your code formatting and other requirements.
If you encounter any issues or have questions, feel free to reach out to the maintainers or open a new issue on GitHub. We're here to help and appreciate your efforts to contribute to the project.
Happy coding, and once again, thank you for your contributions!
Maintainers will look at PRs that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here:
https://github.com/Significant-Gravitas/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-label%3Aconflicts
## Testing your changes
If you add or change code, make sure the updated code is covered by tests.
To increase coverage if necessary, [write tests using pytest].
For more info on running tests, please refer to ["Running tests"](https://significant-gravitas.github.io/Auto-GPT/testing/).
[write tests using pytest]: https://realpython.com/pytest-python-testing/
### API-dependent tests
To run tests that involve making calls to the OpenAI API, we use VCRpy. It caches known
requests and matching responses in so-called *cassettes*, allowing us to run the tests
in CI without needing actual API access.
When changes cause a test prompt to be generated differently, it will likely miss the
cache and make a request to the API, updating the cassette with the new request+response.
*Be sure to include the updated cassette in your PR!*
When you run Pytest locally:
- If no prompt change: you will not consume API tokens because there are no new OpenAI calls required.
- If the prompt changes in a way that the cassettes are not reusable:
- If no API key is present, the test fails: it requires a new cassette, so add an API key to .env.
- If the API key is present, the tests will make a real call to OpenAI.
- If the test ends up being successful, your prompt changes didn't introduce regressions. This is good. Commit your cassettes to your PR.
- If the test is unsuccessful:
- Either: Your change made Auto-GPT less capable, in that case, you have to change your code.
- Or: The test might be poorly written. In that case, you can make suggestions to change the test.
In our CI pipeline, Pytest will use the cassettes and not call paid API providers, so we need your help to record the replays that you break.
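
As a rough illustration of the record/replay mechanism described above (plain `vcrpy` with a hypothetical cassette directory and test name, not the repo's actual pytest fixtures):

```python
# Rough vcrpy sketch: record HTTP traffic once, replay it on later runs.
# The cassette directory and test are hypothetical.
import vcr

my_vcr = vcr.VCR(
    cassette_library_dir="tests/cassettes",
    record_mode="once",  # record on the first run, replay afterwards
)

@my_vcr.use_cassette()
def test_prompt_roundtrip():
    # Any HTTP request made here is recorded to (or replayed from)
    # tests/cassettes/test_prompt_roundtrip.yaml
    ...
```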
### Community Challenges
Challenges are goals we need Auto-GPT to achieve.
To pick the challenge you like, go to the tests/integration/challenges folder and select the areas you would like to work on.
- a challenge is new if level_currently_beaten is None
- a challenge is in progress if level_currently_beaten is greater or equal to 1
- a challenge is beaten if level_currently_beaten = max_level
Here is an example of how to run the memory challenge A and attempt to beat level 3.
pytest -s tests/integration/challenges/memory/test_memory_challenge_a.py --level=3
To beat a challenge, you're not allowed to change anything in the tests folder; you have to add code in the autogpt folder.
Challenges use cassettes. Cassettes allow us to replay your runs in our CI pipeline.
Don't hesitate to delete the cassettes associated with the challenge you're working on if you need to. Otherwise it will keep replaying the last run.
Once you've beaten a new level of a challenge, please create a pull request and we will analyze how you changed Auto-GPT to beat the challenge.
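
In code form, the status rules above amount to something like this hedged sketch; the helper name is illustrative and not part of the repo:

```python
# Hedged sketch of the challenge-status rules above; the name is illustrative.
from typing import Optional

def challenge_status(level_currently_beaten: Optional[int], max_level: int) -> str:
    if level_currently_beaten is None:
        return "new"
    if level_currently_beaten >= max_level:
        return "beaten"
    return "in progress"  # 1 <= level_currently_beaten < max_level
```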

View File

@@ -1,7 +1,40 @@
FROM python:3.11-slim
ENV PIP_NO_CACHE_DIR=yes
WORKDIR /app
# 'dev' or 'release' container build
ARG BUILD_TYPE=dev
# Use an official Python base image from the Docker Hub
FROM python:3.10-slim AS autogpt-base
# Install browsers
RUN apt-get update && apt-get install -y \
chromium-driver firefox-esr \
ca-certificates
# Install utilities
RUN apt-get install -y curl jq wget git
# Set environment variables
ENV PIP_NO_CACHE_DIR=yes \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1
# Install the required python packages globally
ENV PATH="$PATH:/root/.local/bin"
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY scripts/ .
ENTRYPOINT ["python", "main.py"]
# Set the entrypoint
ENTRYPOINT ["python", "-m", "autogpt"]
# dev build -> include everything
FROM autogpt-base as autogpt-dev
RUN pip install --no-cache-dir -r requirements.txt
WORKDIR /app
ONBUILD COPY . ./
# release build -> include bare minimum
FROM autogpt-base as autogpt-release
RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \
pip install --no-cache-dir -r requirements.txt
WORKDIR /app
ONBUILD COPY autogpt/ ./autogpt
FROM autogpt-${BUILD_TYPE} AS auto-gpt

372
README.md

File diff suppressed because one or more lines are too long

View File

@@ -1,7 +0,0 @@
ai_goals:
- Increase net worth.
- Develop and manage multiple businesses autonomously.
- Play to your strengths as a Large Language Model.
ai_name: Entrepreneur-GPT
ai_role: an AI designed to autonomously develop and run businesses with the sole goal
of increasing your net worth.

14
autogpt/__init__.py Normal file
View File

@@ -0,0 +1,14 @@
import os
import random
import sys
from dotenv import load_dotenv
if "pytest" in sys.argv or "pytest" in sys.modules or os.getenv("CI"):
print("Setting random seed to 42")
random.seed(42)
# Load the users .env file into environment variables
load_dotenv(verbose=True, override=True)
del load_dotenv

5
autogpt/__main__.py Normal file
View File

@@ -0,0 +1,5 @@
"""Auto-GPT: A GPT powered AI Assistant"""
import autogpt.cli
if __name__ == "__main__":
autogpt.cli.main()

View File

@@ -0,0 +1,4 @@
from autogpt.agent.agent import Agent
from autogpt.agent.agent_manager import AgentManager
__all__ = ["Agent", "AgentManager"]

290
autogpt/agent/agent.py Normal file
View File

@@ -0,0 +1,290 @@
from colorama import Fore, Style
from autogpt.app import execute_command, get_command
from autogpt.config import Config
from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques
from autogpt.json_utils.utilities import LLM_DEFAULT_RESPONSE_FORMAT, validate_json
from autogpt.llm import chat_with_ai, create_chat_completion, create_chat_message
from autogpt.logs import logger, print_assistant_thoughts
from autogpt.speech import say_text
from autogpt.spinner import Spinner
from autogpt.utils import clean_input
from autogpt.workspace import Workspace
class Agent:
"""Agent class for interacting with Auto-GPT.
Attributes:
ai_name: The name of the agent.
memory: The memory object to use.
full_message_history: The full message history.
next_action_count: The number of actions to execute.
system_prompt: The system prompt is the initial prompt that defines everything
the AI needs to know to achieve its task successfully.
Currently, the dynamic and customizable information in the system prompt are
ai_name, description and goals.
triggering_prompt: The last sentence the AI will see before answering.
For Auto-GPT, this prompt is:
Determine which next command to use, and respond using the format specified
above:
The triggering prompt is not part of the system prompt because between the
system prompt and the triggering
prompt we have contextual information that can distract the AI and make it
forget that its goal is to find the next task to achieve.
SYSTEM PROMPT
CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant)
TRIGGERING PROMPT
The triggering prompt reminds the AI about its short term meta task
(defining the next task)
"""
def __init__(
self,
ai_name,
memory,
full_message_history,
next_action_count,
command_registry,
config,
system_prompt,
triggering_prompt,
workspace_directory,
):
cfg = Config()
self.ai_name = ai_name
self.memory = memory
self.summary_memory = (
"I was created." # Initial memory necessary to avoid hilucination
)
self.last_memory_index = 0
self.full_message_history = full_message_history
self.next_action_count = next_action_count
self.command_registry = command_registry
self.config = config
self.system_prompt = system_prompt
self.triggering_prompt = triggering_prompt
self.workspace = Workspace(workspace_directory, cfg.restrict_to_workspace)
def start_interaction_loop(self):
# Interaction Loop
cfg = Config()
loop_count = 0
command_name = None
arguments = None
user_input = ""
while True:
# Discontinue if continuous limit is reached
loop_count += 1
if (
cfg.continuous_mode
and cfg.continuous_limit > 0
and loop_count > cfg.continuous_limit
):
logger.typewriter_log(
"Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}"
)
break
# Send message to AI, get response
with Spinner("Thinking... "):
assistant_reply = chat_with_ai(
self,
self.system_prompt,
self.triggering_prompt,
self.full_message_history,
self.memory,
cfg.fast_token_limit,
) # TODO: This hardcodes the model to use GPT3.5. Make this an argument
assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply)
for plugin in cfg.plugins:
if not plugin.can_handle_post_planning():
continue
assistant_reply_json = plugin.post_planning(self, assistant_reply_json)
# Print Assistant thoughts
if assistant_reply_json != {}:
validate_json(assistant_reply_json, LLM_DEFAULT_RESPONSE_FORMAT)
# Get command name and arguments
try:
print_assistant_thoughts(
self.ai_name, assistant_reply_json, cfg.speak_mode
)
command_name, arguments = get_command(assistant_reply_json)
if cfg.speak_mode:
say_text(f"I want to execute {command_name}")
arguments = self._resolve_pathlike_command_args(arguments)
except Exception as e:
logger.error("Error: \n", str(e))
if not cfg.continuous_mode and self.next_action_count == 0:
# ### GET USER AUTHORIZATION TO EXECUTE COMMAND ###
# Get key press: Prompt the user to press enter to continue or escape
# to exit
self.user_input = ""
logger.typewriter_log(
"NEXT ACTION: ",
Fore.CYAN,
f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} "
f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
)
logger.info(
"Enter 'y' to authorise command, 'y -N' to run N continuous commands, 's' to run self-feedback commands"
"'n' to exit program, or enter feedback for "
f"{self.ai_name}..."
)
while True:
if cfg.chat_messages_enabled:
console_input = clean_input("Waiting for your response...")
else:
console_input = clean_input(
Fore.MAGENTA + "Input:" + Style.RESET_ALL
)
if console_input.lower().strip() == cfg.authorise_key:
user_input = "GENERATE NEXT COMMAND JSON"
break
elif console_input.lower().strip() == "s":
logger.typewriter_log(
"-=-=-=-=-=-=-= THOUGHTS, REASONING, PLAN AND CRITICISM WILL NOW BE VERIFIED BY AGENT -=-=-=-=-=-=-=",
Fore.GREEN,
"",
)
thoughts = assistant_reply_json.get("thoughts", {})
self_feedback_resp = self.get_self_feedback(
thoughts, cfg.fast_llm_model
)
logger.typewriter_log(
f"SELF FEEDBACK: {self_feedback_resp}",
Fore.YELLOW,
"",
)
if self_feedback_resp[0].lower().strip() == cfg.authorise_key:
user_input = "GENERATE NEXT COMMAND JSON"
else:
user_input = self_feedback_resp
break
elif console_input.lower().strip() == "":
logger.warn("Invalid input format.")
continue
elif console_input.lower().startswith(f"{cfg.authorise_key} -"):
try:
self.next_action_count = abs(
int(console_input.split(" ")[1])
)
user_input = "GENERATE NEXT COMMAND JSON"
except ValueError:
logger.warn(
"Invalid input format. Please enter 'y -n' where n is"
" the number of continuous tasks."
)
continue
break
elif console_input.lower() == cfg.exit_key:
user_input = "EXIT"
break
else:
user_input = console_input
command_name = "human_feedback"
break
if user_input == "GENERATE NEXT COMMAND JSON":
logger.typewriter_log(
"-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=",
Fore.MAGENTA,
"",
)
elif user_input == "EXIT":
logger.info("Exiting...")
break
else:
# Print command
logger.typewriter_log(
"NEXT ACTION: ",
Fore.CYAN,
f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}"
f" ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}",
)
# Execute command
if command_name is not None and command_name.lower().startswith("error"):
result = (
f"Command {command_name} threw the following error: {arguments}"
)
elif command_name == "human_feedback":
result = f"Human feedback: {user_input}"
else:
for plugin in cfg.plugins:
if not plugin.can_handle_pre_command():
continue
command_name, arguments = plugin.pre_command(
command_name, arguments
)
command_result = execute_command(
self.command_registry,
command_name,
arguments,
self.config.prompt_generator,
)
result = f"Command {command_name} returned: " f"{command_result}"
for plugin in cfg.plugins:
if not plugin.can_handle_post_command():
continue
result = plugin.post_command(command_name, result)
if self.next_action_count > 0:
self.next_action_count -= 1
# Check if there's a result from the command append it to the message
# history
if result is not None:
self.full_message_history.append(create_chat_message("system", result))
logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result)
else:
self.full_message_history.append(
create_chat_message("system", "Unable to execute command")
)
logger.typewriter_log(
"SYSTEM: ", Fore.YELLOW, "Unable to execute command"
)
def _resolve_pathlike_command_args(self, command_args):
if "directory" in command_args and command_args["directory"] in {"", "/"}:
command_args["directory"] = str(self.workspace.root)
else:
for pathlike in ["filename", "directory", "clone_path"]:
if pathlike in command_args:
command_args[pathlike] = str(
self.workspace.get_path(command_args[pathlike])
)
return command_args
def get_self_feedback(self, thoughts: dict, llm_model: str) -> str:
"""Generates a feedback response based on the provided thoughts dictionary.
This method takes in a dictionary of thoughts containing keys such as 'reasoning',
'plan', 'thoughts', and 'criticism'. It combines these elements into a single
feedback message and uses the create_chat_completion() function to generate a
response based on the input message.
Args:
thoughts (dict): A dictionary containing thought elements like reasoning,
plan, thoughts, and criticism.
Returns:
str: A feedback response generated using the provided thoughts dictionary.
"""
ai_role = self.config.ai_role
feedback_prompt = f"Below is a message from an AI agent with the role of {ai_role}. Please review the provided Thought, Reasoning, Plan, and Criticism. If these elements accurately contribute to the successful execution of the assumed role, respond with the letter 'Y' followed by a space, and then explain why it is effective. If the provided information is not suitable for achieving the role's objectives, please provide one or more sentences addressing the issue and suggesting a resolution."
reasoning = thoughts.get("reasoning", "")
plan = thoughts.get("plan", "")
thought = thoughts.get("thoughts", "")
criticism = thoughts.get("criticism", "")
feedback_thoughts = thought + reasoning + plan + criticism
return create_chat_completion(
[{"role": "user", "content": feedback_prompt + feedback_thoughts}],
llm_model,
)
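
A hedged usage sketch for the `get_self_feedback` path above; the thoughts payload and model name are illustrative assumptions, and `agent` stands for an `Agent` instance constructed as shown:

```python
# Illustrative call into the self-feedback path; the thoughts dict mirrors
# the keys the method reads, and the model name is an assumption.
thoughts = {
    "thoughts": "I should list the workspace files first.",
    "reasoning": "Knowing the files narrows down the next command.",
    "plan": "- list_files\n- read the most relevant one",
    "criticism": "Avoid re-reading files that were already summarized.",
}
feedback = agent.get_self_feedback(thoughts, "gpt-3.5-turbo")
if feedback[0].lower() == "y":
    user_input = "GENERATE NEXT COMMAND JSON"  # proceed as authorised
```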

View File

@@ -0,0 +1,145 @@
"""Agent manager for managing GPT agents"""
from __future__ import annotations
from typing import List
from autogpt.config.config import Config
from autogpt.llm import Message, create_chat_completion
from autogpt.singleton import Singleton
class AgentManager(metaclass=Singleton):
"""Agent manager for managing GPT agents"""
def __init__(self):
self.next_key = 0
self.agents = {} # key, (task, full_message_history, model)
self.cfg = Config()
# Create new GPT agent
# TODO: Centralise use of create_chat_completion() to globally enforce token limit
def create_agent(self, task: str, prompt: str, model: str) -> tuple[int, str]:
"""Create a new agent and return its key
Args:
task: The task to perform
prompt: The prompt to use
model: The model to use
Returns:
The key of the new agent
"""
messages: List[Message] = [
{"role": "user", "content": prompt},
]
for plugin in self.cfg.plugins:
if not plugin.can_handle_pre_instruction():
continue
if plugin_messages := plugin.pre_instruction(messages):
messages.extend(iter(plugin_messages))
# Start GPT instance
agent_reply = create_chat_completion(
model=model,
messages=messages,
)
messages.append({"role": "assistant", "content": agent_reply})
plugins_reply = ""
for i, plugin in enumerate(self.cfg.plugins):
if not plugin.can_handle_on_instruction():
continue
if plugin_result := plugin.on_instruction(messages):
sep = "\n" if i else ""
plugins_reply = f"{plugins_reply}{sep}{plugin_result}"
if plugins_reply and plugins_reply != "":
messages.append({"role": "assistant", "content": plugins_reply})
key = self.next_key
# This is done instead of len(agents) to make keys unique even if agents
# are deleted
self.next_key += 1
self.agents[key] = (task, messages, model)
for plugin in self.cfg.plugins:
if not plugin.can_handle_post_instruction():
continue
agent_reply = plugin.post_instruction(agent_reply)
return key, agent_reply
def message_agent(self, key: str | int, message: str) -> str:
"""Send a message to an agent and return its response
Args:
key: The key of the agent to message
message: The message to send to the agent
Returns:
The agent's response
"""
task, messages, model = self.agents[int(key)]
# Add user message to message history before sending to agent
messages.append({"role": "user", "content": message})
for plugin in self.cfg.plugins:
if not plugin.can_handle_pre_instruction():
continue
if plugin_messages := plugin.pre_instruction(messages):
for plugin_message in plugin_messages:
messages.append(plugin_message)
# Start GPT instance
agent_reply = create_chat_completion(
model=model,
messages=messages,
)
messages.append({"role": "assistant", "content": agent_reply})
plugins_reply = agent_reply
for i, plugin in enumerate(self.cfg.plugins):
if not plugin.can_handle_on_instruction():
continue
if plugin_result := plugin.on_instruction(messages):
sep = "\n" if i else ""
plugins_reply = f"{plugins_reply}{sep}{plugin_result}"
# Update full message history
if plugins_reply and plugins_reply != "":
messages.append({"role": "assistant", "content": plugins_reply})
for plugin in self.cfg.plugins:
if not plugin.can_handle_post_instruction():
continue
agent_reply = plugin.post_instruction(agent_reply)
return agent_reply
def list_agents(self) -> list[tuple[str | int, str]]:
"""Return a list of all agents
Returns:
A list of tuples of the form (key, task)
"""
# Return a list of agent keys and their tasks
return [(key, task) for key, (task, _, _) in self.agents.items()]
def delete_agent(self, key: str | int) -> bool:
"""Delete an agent from the agent manager
Args:
key: The key of the agent to delete
Returns:
True if successful, False otherwise
"""
try:
del self.agents[int(key)]
return True
except KeyError:
return False
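
A minimal usage sketch for the `AgentManager` API above; the task, prompt, and model values are illustrative:

```python
# Minimal usage sketch for AgentManager; values are illustrative.
manager = AgentManager()  # Singleton: repeated calls return the same instance

key, reply = manager.create_agent(
    task="summarize",
    prompt="You are a summarization agent. Summarize any text I send you.",
    model="gpt-3.5-turbo",
)
reply = manager.message_agent(key, "Summarize: Auto-GPT chains LLM calls into a loop.")
print(manager.list_agents())  # e.g. [(0, "summarize")]
manager.delete_agent(key)
```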

255
autogpt/app.py Normal file
View File

@@ -0,0 +1,255 @@
""" Command and Control """
import json
from typing import Dict, List, NoReturn, Union
from autogpt.agent.agent_manager import AgentManager
from autogpt.commands.command import CommandRegistry, command
from autogpt.commands.web_requests import scrape_links, scrape_text
from autogpt.config import Config
from autogpt.logs import logger
from autogpt.memory import get_memory
from autogpt.processing.text import summarize_text
from autogpt.prompts.generator import PromptGenerator
from autogpt.speech import say_text
from autogpt.url_utils.validators import validate_url
CFG = Config()
AGENT_MANAGER = AgentManager()
def is_valid_int(value: str) -> bool:
"""Check if the value is a valid integer
Args:
value (str): The value to check
Returns:
bool: True if the value is a valid integer, False otherwise
"""
try:
int(value)
return True
except ValueError:
return False
def get_command(response_json: Dict):
"""Parse the response and return the command name and arguments
Args:
        response_json (dict): The response from the AI
Returns:
tuple: The command name and arguments
Raises:
json.decoder.JSONDecodeError: If the response is not valid JSON
Exception: If any other error occurs
"""
try:
if "command" not in response_json:
return "Error:", "Missing 'command' object in JSON"
if not isinstance(response_json, dict):
return "Error:", f"'response_json' object is not dictionary {response_json}"
command = response_json["command"]
if not isinstance(command, dict):
return "Error:", "'command' object is not a dictionary"
if "name" not in command:
return "Error:", "Missing 'name' field in 'command' object"
command_name = command["name"]
# Use an empty dictionary if 'args' field is not present in 'command' object
arguments = command.get("args", {})
return command_name, arguments
except json.decoder.JSONDecodeError:
return "Error:", "Invalid JSON"
# All other errors, return "Error: + error message"
except Exception as e:
return "Error:", str(e)
def map_command_synonyms(command_name: str):
"""Takes the original command name given by the AI, and checks if the
string matches a list of common/known hallucinations
"""
synonyms = [
("write_file", "write_to_file"),
("create_file", "write_to_file"),
("search", "google"),
]
for seen_command, actual_command_name in synonyms:
if command_name == seen_command:
return actual_command_name
return command_name
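For illustration, known hallucinated names are normalised before dispatch, while names without a synonym entry pass through unchanged:

assert map_command_synonyms("create_file") == "write_to_file"
assert map_command_synonyms("search") == "google"
assert map_command_synonyms("browse_website") == "browse_website"  # no synonym entry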
def execute_command(
command_registry: CommandRegistry,
command_name: str,
arguments,
prompt: PromptGenerator,
):
"""Execute the command and return the result
Args:
command_name (str): The name of the command to execute
arguments (dict): The arguments for the command
Returns:
str: The result of the command
"""
try:
cmd = command_registry.commands.get(command_name)
# If the command is found, call it with the provided arguments
if cmd:
return cmd(**arguments)
# TODO: Remove commands below after they are moved to the command registry.
command_name = map_command_synonyms(command_name.lower())
if command_name == "memory_add":
return get_memory(CFG).add(arguments["string"])
# TODO: Change these to take in a file rather than pasted code, if
# non-file is given, return instructions "Input should be a python
# filepath, write your code to file and try again
elif command_name == "task_complete":
shutdown()
else:
for command in prompt.commands:
if (
command_name == command["label"].lower()
or command_name == command["name"].lower()
):
return command["function"](**arguments)
return (
f"Unknown command '{command_name}'. Please refer to the 'COMMANDS'"
" list for available commands and only respond in the specified JSON"
" format."
)
except Exception as e:
return f"Error: {str(e)}"
@command(
"get_text_summary", "Get text summary", '"url": "<url>", "question": "<question>"'
)
@validate_url
def get_text_summary(url: str, question: str) -> str:
"""Return the results of a Google search
Args:
url (str): The url to scrape
question (str): The question to summarize the text for
Returns:
str: The summary of the text
"""
text = scrape_text(url)
summary = summarize_text(url, text, question)
return f""" "Result" : {summary}"""
@command("get_hyperlinks", "Get text summary", '"url": "<url>"')
@validate_url
def get_hyperlinks(url: str) -> Union[str, List[str]]:
"""Return the results of a Google search
Args:
url (str): The url to scrape
Returns:
str or list: The hyperlinks on the page
"""
return scrape_links(url)
def shutdown() -> NoReturn:
"""Shut down the program"""
logger.info("Shutting down...")
quit()
@command(
"start_agent",
"Start GPT Agent",
'"name": "<name>", "task": "<short_task_desc>", "prompt": "<prompt>"',
)
def start_agent(name: str, task: str, prompt: str, model=CFG.fast_llm_model) -> str:
"""Start an agent with a given name, task, and prompt
Args:
name (str): The name of the agent
task (str): The task of the agent
prompt (str): The prompt for the agent
model (str): The model to use for the agent
Returns:
str: The response of the agent
"""
# Remove underscores from name
voice_name = name.replace("_", " ")
first_message = f"""You are {name}. Respond with: "Acknowledged"."""
agent_intro = f"{voice_name} here, Reporting for duty!"
# Create agent
if CFG.speak_mode:
say_text(agent_intro, 1)
key, ack = AGENT_MANAGER.create_agent(task, first_message, model)
if CFG.speak_mode:
say_text(f"Hello {voice_name}. Your task is as follows. {task}.")
# Assign task (prompt), get response
agent_response = AGENT_MANAGER.message_agent(key, prompt)
return f"Agent {name} created with key {key}. First response: {agent_response}"
@command("message_agent", "Message GPT Agent", '"key": "<key>", "message": "<message>"')
def message_agent(key: str, message: str) -> str:
"""Message an agent with a given key and message"""
# Check if the key is a valid integer
if is_valid_int(key):
agent_response = AGENT_MANAGER.message_agent(int(key), message)
else:
return "Invalid key, must be an integer."
# Speak response
if CFG.speak_mode:
say_text(agent_response, 1)
return agent_response
@command("list_agents", "List GPT Agents", "")
def list_agents() -> str:
"""List all agents
Returns:
str: A list of all agents
"""
return "List of agents:\n" + "\n".join(
[str(x[0]) + ": " + x[1] for x in AGENT_MANAGER.list_agents()]
)
@command("delete_agent", "Delete GPT Agent", '"key": "<key>"')
def delete_agent(key: str) -> str:
"""Delete an agent with a given key
Args:
key (str): The key of the agent to delete
Returns:
str: A message indicating whether the agent was deleted or not
"""
result = AGENT_MANAGER.delete_agent(key)
return f"Agent {key} deleted." if result else f"Agent {key} does not exist."

autogpt/cli.py (new file)
@@ -0,0 +1,109 @@
"""Main script for the autogpt package."""
import click
@click.group(invoke_without_command=True)
@click.option("-c", "--continuous", is_flag=True, help="Enable Continuous Mode")
@click.option(
"--skip-reprompt",
"-y",
is_flag=True,
help="Skips the re-prompting messages at the beginning of the script",
)
@click.option(
"--ai-settings",
"-C",
help="Specifies which ai_settings.yaml file to use, will also automatically skip the re-prompt.",
)
@click.option(
"-l",
"--continuous-limit",
type=int,
help="Defines the number of times to run in continuous mode",
)
@click.option("--speak", is_flag=True, help="Enable Speak Mode")
@click.option("--debug", is_flag=True, help="Enable Debug Mode")
@click.option("--gpt3only", is_flag=True, help="Enable GPT3.5 Only Mode")
@click.option("--gpt4only", is_flag=True, help="Enable GPT4 Only Mode")
@click.option(
"--use-memory",
"-m",
"memory_type",
type=str,
help="Defines which Memory backend to use",
)
@click.option(
"-b",
"--browser-name",
help="Specifies which web-browser to use when using selenium to scrape the web.",
)
@click.option(
"--allow-downloads",
is_flag=True,
help="Dangerous: Allows Auto-GPT to download files natively.",
)
@click.option(
"--skip-news",
is_flag=True,
help="Specifies whether to suppress the output of latest news on startup.",
)
@click.option(
# TODO: this is a hidden option for now, necessary for integration testing.
# We should make this public once we're ready to roll out agent specific workspaces.
"--workspace-directory",
"-w",
type=click.Path(),
hidden=True,
)
@click.option(
"--install-plugin-deps",
is_flag=True,
help="Installs external dependencies for 3rd party plugins.",
)
@click.pass_context
def main(
ctx: click.Context,
continuous: bool,
continuous_limit: int,
ai_settings: str,
skip_reprompt: bool,
speak: bool,
debug: bool,
gpt3only: bool,
gpt4only: bool,
memory_type: str,
browser_name: str,
allow_downloads: bool,
skip_news: bool,
workspace_directory: str,
install_plugin_deps: bool,
) -> None:
"""
    Welcome to Auto-GPT, an experimental open-source application showcasing the capabilities of GPT-4 and pushing the boundaries of AI.
Start an Auto-GPT assistant.
"""
# Put imports inside function to avoid importing everything when starting the CLI
from autogpt.main import run_auto_gpt
if ctx.invoked_subcommand is None:
run_auto_gpt(
continuous,
continuous_limit,
ai_settings,
skip_reprompt,
speak,
debug,
gpt3only,
gpt4only,
memory_type,
browser_name,
allow_downloads,
skip_news,
workspace_directory,
install_plugin_deps,
)
if __name__ == "__main__":
main()
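Typical invocations of the CLI defined above (flag values are illustrative; assumes the package is installed with its dependencies):

python -m autogpt --gpt3only --continuous -l 5
python -m autogpt -C my_settings.yaml --skip-news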

@@ -0,0 +1,31 @@
"""Code evaluation module."""
from __future__ import annotations
from autogpt.commands.command import command
from autogpt.llm import call_ai_function
@command(
"analyze_code",
"Analyze Code",
'"code": "<full_code_string>"',
)
def analyze_code(code: str) -> list[str]:
"""
    A function that takes in a code string and returns a response from a chat
    completion API call.
Parameters:
code (str): Code to be evaluated.
Returns:
A result string from create chat completion. A list of suggestions to
improve the code.
"""
function_string = "def analyze_code(code: str) -> list[str]:"
args = [code]
description_string = (
"Analyzes the given code and returns a list of suggestions for improvements."
)
return call_ai_function(function_string, args, description_string)
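The call_ai_function pattern above generalises: the model is asked to behave as if function_string were implemented and had been called with args. A sketch with a hypothetical docstring-fixing helper built the same way:

from autogpt.llm import call_ai_function

def fix_docstrings(code: str) -> str:
    """Hypothetical helper: ask the LLM to rewrite the docstrings in the given code."""
    function_string = "def fix_docstrings(code: str) -> str:"
    args = [code]
    description_string = (
        "Rewrites the given code with complete, accurate docstrings,"
        " making no other changes."
    )
    return call_ai_function(function_string, args, description_string)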

@@ -0,0 +1,61 @@
"""Commands for converting audio to text."""
import json
import requests
from autogpt.commands.command import command
from autogpt.config import Config
CFG = Config()
@command(
"read_audio_from_file",
"Convert Audio to text",
'"filename": "<filename>"',
CFG.huggingface_audio_to_text_model,
"Configure huggingface_audio_to_text_model.",
)
def read_audio_from_file(filename: str) -> str:
"""
Convert audio to text.
Args:
filename (str): The path to the audio file
Returns:
str: The text from the audio
"""
with open(filename, "rb") as audio_file:
audio = audio_file.read()
return read_audio(audio)
def read_audio(audio: bytes) -> str:
"""
Convert audio to text.
Args:
audio (bytes): The audio to convert
Returns:
str: The text from the audio
"""
model = CFG.huggingface_audio_to_text_model
api_url = f"https://api-inference.huggingface.co/models/{model}"
api_token = CFG.huggingface_api_token
headers = {"Authorization": f"Bearer {api_token}"}
if api_token is None:
raise ValueError(
"You need to set your Hugging Face API token in the config file."
)
response = requests.post(
api_url,
headers=headers,
data=audio,
)
text = json.loads(response.content.decode("utf-8"))["text"]
return f"The audio says: {text}"

autogpt/commands/command.py (new file)
@@ -0,0 +1,156 @@
import functools
import importlib
import inspect
from typing import Any, Callable, Optional
# Unique identifier for auto-gpt commands
AUTO_GPT_COMMAND_IDENTIFIER = "auto_gpt_command"
class Command:
"""A class representing a command.
Attributes:
name (str): The name of the command.
description (str): A brief description of what the command does.
signature (str): The signature of the function that the command executes. Defaults to None.
"""
def __init__(
self,
name: str,
description: str,
method: Callable[..., Any],
signature: str = "",
enabled: bool = True,
disabled_reason: Optional[str] = None,
):
self.name = name
self.description = description
self.method = method
self.signature = signature if signature else str(inspect.signature(self.method))
self.enabled = enabled
self.disabled_reason = disabled_reason
def __call__(self, *args, **kwargs) -> Any:
if not self.enabled:
return f"Command '{self.name}' is disabled: {self.disabled_reason}"
return self.method(*args, **kwargs)
def __str__(self) -> str:
return f"{self.name}: {self.description}, args: {self.signature}"
class CommandRegistry:
"""
The CommandRegistry class is a manager for a collection of Command objects.
It allows the registration, modification, and retrieval of Command objects,
as well as the scanning and loading of command plugins from a specified
directory.
"""
def __init__(self):
self.commands = {}
def _import_module(self, module_name: str) -> Any:
return importlib.import_module(module_name)
def _reload_module(self, module: Any) -> Any:
return importlib.reload(module)
def register(self, cmd: Command) -> None:
self.commands[cmd.name] = cmd
def unregister(self, command_name: str):
if command_name in self.commands:
del self.commands[command_name]
else:
raise KeyError(f"Command '{command_name}' not found in registry.")
def reload_commands(self) -> None:
"""Reloads all loaded command plugins."""
for cmd_name in self.commands:
cmd = self.commands[cmd_name]
module = self._import_module(cmd.__module__)
reloaded_module = self._reload_module(module)
if hasattr(reloaded_module, "register"):
reloaded_module.register(self)
def get_command(self, name: str) -> Callable[..., Any]:
return self.commands[name]
def call(self, command_name: str, **kwargs) -> Any:
if command_name not in self.commands:
raise KeyError(f"Command '{command_name}' not found in registry.")
command = self.commands[command_name]
return command(**kwargs)
def command_prompt(self) -> str:
"""
Returns a string representation of all registered `Command` objects for use in a prompt
"""
commands_list = [
f"{idx + 1}. {str(cmd)}" for idx, cmd in enumerate(self.commands.values())
]
return "\n".join(commands_list)
def import_commands(self, module_name: str) -> None:
"""
Imports the specified Python module containing command plugins.
This method imports the associated module and registers any functions or
classes that are decorated with the `AUTO_GPT_COMMAND_IDENTIFIER` attribute
as `Command` objects. The registered `Command` objects are then added to the
`commands` dictionary of the `CommandRegistry` object.
Args:
module_name (str): The name of the module to import for command plugins.
"""
module = importlib.import_module(module_name)
for attr_name in dir(module):
attr = getattr(module, attr_name)
# Register decorated functions
if hasattr(attr, AUTO_GPT_COMMAND_IDENTIFIER) and getattr(
attr, AUTO_GPT_COMMAND_IDENTIFIER
):
self.register(attr.command)
# Register command classes
elif (
inspect.isclass(attr) and issubclass(attr, Command) and attr != Command
):
cmd_instance = attr()
self.register(cmd_instance)
def command(
name: str,
description: str,
signature: str = "",
enabled: bool = True,
disabled_reason: Optional[str] = None,
) -> Callable[..., Any]:
"""The command decorator is used to create Command objects from ordinary functions."""
def decorator(func: Callable[..., Any]) -> Command:
cmd = Command(
name=name,
description=description,
method=func,
signature=signature,
enabled=enabled,
disabled_reason=disabled_reason,
)
@functools.wraps(func)
def wrapper(*args, **kwargs) -> Any:
return func(*args, **kwargs)
wrapper.command = cmd
setattr(wrapper, AUTO_GPT_COMMAND_IDENTIFIER, True)
return wrapper
return decorator
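A minimal sketch of defining and registering a custom command with the decorator above (the reverse_text command is hypothetical):

from autogpt.commands.command import CommandRegistry, command

@command("reverse_text", "Reverse Text", '"text": "<text>"')
def reverse_text(text: str) -> str:
    """Return the input text reversed."""
    return text[::-1]

registry = CommandRegistry()
registry.register(reverse_text.command)  # the decorator attached a Command object
print(registry.command_prompt())  # 1. reverse_text: Reverse Text, args: "text": "<text>"
print(registry.call("reverse_text", text="Auto-GPT"))  # TPG-otuA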

@@ -0,0 +1,184 @@
"""Execute code in a Docker container"""
import os
import subprocess
from pathlib import Path
import docker
from docker.errors import ImageNotFound
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.logs import logger
CFG = Config()
@command("execute_python_file", "Execute Python File", '"filename": "<filename>"')
def execute_python_file(filename: str) -> str:
"""Execute a Python file in a Docker container and return the output
Args:
filename (str): The name of the file to execute
Returns:
str: The output of the file
"""
logger.info(f"Executing file '{filename}'")
if not filename.endswith(".py"):
return "Error: Invalid file type. Only .py files are allowed."
if not os.path.isfile(filename):
return f"Error: File '{filename}' does not exist."
if we_are_running_in_a_docker_container():
result = subprocess.run(
f"python {filename}", capture_output=True, encoding="utf8", shell=True
)
if result.returncode == 0:
return result.stdout
else:
return f"Error: {result.stderr}"
try:
client = docker.from_env()
# You can replace this with the desired Python image/version
# You can find available Python images on Docker Hub:
# https://hub.docker.com/_/python
image_name = "python:3-alpine"
try:
client.images.get(image_name)
logger.warn(f"Image '{image_name}' found locally")
except ImageNotFound:
logger.info(
f"Image '{image_name}' not found locally, pulling from Docker Hub"
)
# Use the low-level API to stream the pull response
low_level_client = docker.APIClient()
for line in low_level_client.pull(image_name, stream=True, decode=True):
# Print the status and progress, if available
status = line.get("status")
progress = line.get("progress")
if status and progress:
logger.info(f"{status}: {progress}")
elif status:
logger.info(status)
container = client.containers.run(
image_name,
f"python {Path(filename).relative_to(CFG.workspace_path)}",
volumes={
CFG.workspace_path: {
"bind": "/workspace",
"mode": "ro",
}
},
working_dir="/workspace",
stderr=True,
stdout=True,
detach=True,
)
container.wait()
logs = container.logs().decode("utf-8")
container.remove()
# print(f"Execution complete. Output: {output}")
# print(f"Logs: {logs}")
return logs
except docker.errors.DockerException as e:
logger.warn(
"Could not run the script in a container. If you haven't already, please install Docker https://docs.docker.com/get-docker/"
)
return f"Error: {str(e)}"
except Exception as e:
return f"Error: {str(e)}"
@command(
"execute_shell",
"Execute Shell Command, non-interactive commands only",
'"command_line": "<command_line>"',
CFG.execute_local_commands,
"You are not allowed to run local shell commands. To execute"
" shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
"in your config. Do not attempt to bypass the restriction.",
)
def execute_shell(command_line: str) -> str:
"""Execute a shell command and return the output
Args:
command_line (str): The command line to execute
Returns:
str: The output of the command
"""
current_dir = Path.cwd()
# Change dir into workspace if necessary
if not current_dir.is_relative_to(CFG.workspace_path):
os.chdir(CFG.workspace_path)
logger.info(
f"Executing command '{command_line}' in working directory '{os.getcwd()}'"
)
result = subprocess.run(command_line, capture_output=True, shell=True)
output = f"STDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
# Change back to whatever the prior working dir was
os.chdir(current_dir)
return output
@command(
"execute_shell_popen",
"Execute Shell Command, non-interactive commands only",
'"command_line": "<command_line>"',
CFG.execute_local_commands,
"You are not allowed to run local shell commands. To execute"
" shell commands, EXECUTE_LOCAL_COMMANDS must be set to 'True' "
"in your config. Do not attempt to bypass the restriction.",
)
def execute_shell_popen(command_line) -> str:
"""Execute a shell command with Popen and returns an english description
of the event and the process id
Args:
command_line (str): The command line to execute
Returns:
str: Description of the fact that the process started and its id
"""
current_dir = os.getcwd()
# Change dir into workspace if necessary
if CFG.workspace_path not in current_dir:
os.chdir(CFG.workspace_path)
logger.info(
f"Executing command '{command_line}' in working directory '{os.getcwd()}'"
)
do_not_show_output = subprocess.DEVNULL
process = subprocess.Popen(
command_line, shell=True, stdout=do_not_show_output, stderr=do_not_show_output
)
# Change back to whatever the prior working dir was
os.chdir(current_dir)
return f"Subprocess started with PID:'{str(process.pid)}'"
def we_are_running_in_a_docker_container() -> bool:
"""Check if we are running in a Docker container
Returns:
bool: True if we are running in a Docker container, False otherwise
"""
return os.path.exists("/.dockerenv")

@@ -0,0 +1,272 @@
"""File operations for AutoGPT"""
from __future__ import annotations
import os
import os.path
from typing import Generator
import requests
from colorama import Back, Fore
from requests.adapters import HTTPAdapter, Retry
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.logs import logger
from autogpt.spinner import Spinner
from autogpt.utils import readable_file_size
CFG = Config()
def check_duplicate_operation(operation: str, filename: str) -> bool:
"""Check if the operation has already been performed on the given file
Args:
operation (str): The operation to check for
filename (str): The name of the file to check for
Returns:
bool: True if the operation has already been performed on the file
"""
log_content = read_file(CFG.file_logger_path)
log_entry = f"{operation}: {filename}\n"
return log_entry in log_content
def log_operation(operation: str, filename: str) -> None:
"""Log the file operation to the file_logger.txt
Args:
operation (str): The operation to log
filename (str): The name of the file the operation was performed on
"""
log_entry = f"{operation}: {filename}\n"
append_to_file(CFG.file_logger_path, log_entry, should_log=False)
def split_file(
content: str, max_length: int = 4000, overlap: int = 0
) -> Generator[str, None, None]:
"""
Split text into chunks of a specified maximum length with a specified overlap
between chunks.
:param content: The input text to be split into chunks
:param max_length: The maximum length of each chunk,
default is 4000 (about 1k token)
:param overlap: The number of overlapping characters between chunks,
default is no overlap
:return: A generator yielding chunks of text
"""
start = 0
content_length = len(content)
while start < content_length:
end = start + max_length
if end + overlap < content_length:
chunk = content[start : end + overlap - 1]
else:
chunk = content[start:content_length]
# Account for the case where the last chunk is shorter than the overlap, so it has already been consumed
if len(chunk) <= overlap:
break
yield chunk
start += max_length - overlap
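A quick illustration of the chunking above, traced from the implementation: each chunk spans at most max_length + overlap - 1 characters, and the window advances by max_length - overlap:

chunks = list(split_file("abcdefghij", max_length=4, overlap=1))
print(chunks)  # ['abcd', 'defg', 'ghij'] -- adjacent chunks share one character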
@command("read_file", "Read file", '"filename": "<filename>"')
def read_file(filename: str) -> str:
"""Read a file and return the contents
Args:
filename (str): The name of the file to read
Returns:
str: The contents of the file
"""
try:
with open(filename, "r", encoding="utf-8") as f:
content = f.read()
return content
except Exception as e:
return f"Error: {str(e)}"
def ingest_file(
filename: str, memory, max_length: int = 4000, overlap: int = 200
) -> None:
"""
Ingest a file by reading its content, splitting it into chunks with a specified
maximum length and overlap, and adding the chunks to the memory storage.
:param filename: The name of the file to ingest
:param memory: An object with an add() method to store the chunks in memory
:param max_length: The maximum length of each chunk, default is 4000
:param overlap: The number of overlapping characters between chunks, default is 200
"""
try:
logger.info(f"Working with file {filename}")
content = read_file(filename)
content_length = len(content)
logger.info(f"File length: {content_length} characters")
chunks = list(split_file(content, max_length=max_length, overlap=overlap))
num_chunks = len(chunks)
for i, chunk in enumerate(chunks):
logger.info(f"Ingesting chunk {i + 1} / {num_chunks} into memory")
memory_to_add = (
f"Filename: {filename}\n" f"Content part#{i + 1}/{num_chunks}: {chunk}"
)
memory.add(memory_to_add)
logger.info(f"Done ingesting {num_chunks} chunks from {filename}.")
except Exception as e:
logger.info(f"Error while ingesting file '{filename}': {str(e)}")
@command("write_to_file", "Write to file", '"filename": "<filename>", "text": "<text>"')
def write_to_file(filename: str, text: str) -> str:
"""Write text to a file
Args:
filename (str): The name of the file to write to
text (str): The text to write to the file
Returns:
str: A message indicating success or failure
"""
if check_duplicate_operation("write", filename):
return "Error: File has already been updated."
try:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
with open(filename, "w", encoding="utf-8") as f:
f.write(text)
log_operation("write", filename)
return "File written to successfully."
except Exception as e:
return f"Error: {str(e)}"
@command(
"append_to_file", "Append to file", '"filename": "<filename>", "text": "<text>"'
)
def append_to_file(filename: str, text: str, should_log: bool = True) -> str:
"""Append text to a file
Args:
filename (str): The name of the file to append to
text (str): The text to append to the file
should_log (bool): Should log output
Returns:
str: A message indicating success or failure
"""
try:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
with open(filename, "a") as f:
f.write(text)
if should_log:
log_operation("append", filename)
return "Text appended successfully."
except Exception as e:
return f"Error: {str(e)}"
@command("delete_file", "Delete file", '"filename": "<filename>"')
def delete_file(filename: str) -> str:
"""Delete a file
Args:
filename (str): The name of the file to delete
Returns:
str: A message indicating success or failure
"""
if check_duplicate_operation("delete", filename):
return "Error: File has already been deleted."
try:
os.remove(filename)
log_operation("delete", filename)
return "File deleted successfully."
except Exception as e:
return f"Error: {str(e)}"
@command("search_files", "Search Files", '"directory": "<directory>"')
def search_files(directory: str) -> list[str]:
"""Search for files in a directory
Args:
directory (str): The directory to search in
Returns:
list[str]: A list of files found in the directory
"""
found_files = []
for root, _, files in os.walk(directory):
for file in files:
if file.startswith("."):
continue
relative_path = os.path.relpath(
os.path.join(root, file), CFG.workspace_path
)
found_files.append(relative_path)
return found_files
@command(
"download_file",
"Download File",
'"url": "<url>", "filename": "<filename>"',
CFG.allow_downloads,
"Error: You do not have user authorization to download files locally.",
)
def download_file(url, filename):
"""Downloads a file
Args:
url (str): URL of the file to download
filename (str): Filename to save the file as
"""
try:
directory = os.path.dirname(filename)
os.makedirs(directory, exist_ok=True)
message = f"{Fore.YELLOW}Downloading file from {Back.LIGHTBLUE_EX}{url}{Back.RESET}{Fore.RESET}"
with Spinner(message) as spinner:
session = requests.Session()
retry = Retry(total=3, backoff_factor=1, status_forcelist=[502, 503, 504])
adapter = HTTPAdapter(max_retries=retry)
session.mount("http://", adapter)
session.mount("https://", adapter)
total_size = 0
downloaded_size = 0
with session.get(url, allow_redirects=True, stream=True) as r:
r.raise_for_status()
total_size = int(r.headers.get("Content-Length", 0))
downloaded_size = 0
with open(filename, "wb") as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
downloaded_size += len(chunk)
# Update the progress message
progress = f"{readable_file_size(downloaded_size)} / {readable_file_size(total_size)}"
spinner.update_message(f"{message} {progress}")
return f'Successfully downloaded and locally stored file: "{filename}"! (Size: {readable_file_size(downloaded_size)})'
except requests.HTTPError as e:
return f"Got an HTTP Error whilst trying to download file: {e}"
except Exception as e:
return "Error: " + str(e)

@@ -0,0 +1,35 @@
"""Git operations for autogpt"""
from git.repo import Repo
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.url_utils.validators import validate_url
CFG = Config()
@command(
"clone_repository",
"Clone Repository",
'"url": "<repository_url>", "clone_path": "<clone_path>"',
CFG.github_username and CFG.github_api_key,
"Configure github_username and github_api_key.",
)
@validate_url
def clone_repository(url: str, clone_path: str) -> str:
"""Clone a GitHub repository locally.
Args:
url (str): The URL of the repository to clone.
clone_path (str): The path to clone the repository to.
Returns:
str: The result of the clone operation.
"""
split_url = url.split("//")
auth_repo_url = f"//{CFG.github_username}:{CFG.github_api_key}@".join(split_url)
try:
Repo.clone_from(url=auth_repo_url, to_path=clone_path)
return f"""Cloned {url} to {clone_path}"""
except Exception as e:
return f"Error: {str(e)}"

@@ -0,0 +1,117 @@
"""Google search command for Autogpt."""
from __future__ import annotations
import json
from duckduckgo_search import ddg
from autogpt.commands.command import command
from autogpt.config import Config
CFG = Config()
@command("google", "Google Search", '"query": "<query>"', not CFG.google_api_key)
def google_search(query: str, num_results: int = 8) -> str:
"""Return the results of a Google search
Args:
query (str): The search query.
num_results (int): The number of results to return.
Returns:
str: The results of the search.
"""
search_results = []
if not query:
return json.dumps(search_results)
results = ddg(query, max_results=num_results)
if not results:
return json.dumps(search_results)
    search_results.extend(results)
results = json.dumps(search_results, ensure_ascii=False, indent=4)
return safe_google_results(results)
@command(
"google",
"Google Search",
'"query": "<query>"',
bool(CFG.google_api_key),
"Configure google_api_key.",
)
def google_official_search(query: str, num_results: int = 8) -> str | list[str]:
"""Return the results of a Google search using the official Google API
Args:
query (str): The search query.
num_results (int): The number of results to return.
Returns:
str: The results of the search.
"""
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
try:
# Get the Google API key and Custom Search Engine ID from the config file
api_key = CFG.google_api_key
custom_search_engine_id = CFG.custom_search_engine_id
# Initialize the Custom Search API service
service = build("customsearch", "v1", developerKey=api_key)
# Send the search query and retrieve the results
result = (
service.cse()
.list(q=query, cx=custom_search_engine_id, num=num_results)
.execute()
)
# Extract the search result items from the response
search_results = result.get("items", [])
# Create a list of only the URLs from the search results
search_results_links = [item["link"] for item in search_results]
except HttpError as e:
# Handle errors in the API call
error_details = json.loads(e.content.decode())
# Check if the error is related to an invalid or missing API key
if error_details.get("error", {}).get(
"code"
) == 403 and "invalid API key" in error_details.get("error", {}).get(
"message", ""
):
return "Error: The provided Google API key is invalid or missing."
else:
return f"Error: {e}"
# google_result can be a list or a string depending on the search results
# Return the list of search result URLs
return safe_google_results(search_results_links)
def safe_google_results(results: str | list) -> str:
"""
Return the results of a google search in a safe format.
Args:
results (str | list): The search results.
Returns:
str: The results of the search.
"""
if isinstance(results, list):
safe_message = json.dumps(
[result.encode("utf-8", "ignore") for result in results]
)
else:
safe_message = results.encode("utf-8", "ignore").decode("utf-8")
return safe_message
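A sketch of consuming the DuckDuckGo-backed variant above: it returns a JSON string, so callers decode it before use (requires network access; the title/href keys follow duckduckgo_search's result format):

import json

raw = google_search("Auto-GPT repository", num_results=3)
for hit in json.loads(raw):
    print(hit.get("title"), "->", hit.get("href"))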

@@ -0,0 +1,165 @@
""" Image Generation Module for AutoGPT."""
import io
import uuid
from base64 import b64decode
import openai
import requests
from PIL import Image
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.logs import logger
CFG = Config()
@command("generate_image", "Generate Image", '"prompt": "<prompt>"', CFG.image_provider)
def generate_image(prompt: str, size: int = 256) -> str:
"""Generate an image from a prompt.
Args:
prompt (str): The prompt to use
size (int, optional): The size of the image. Defaults to 256. (Not supported by HuggingFace)
Returns:
str: The filename of the image
"""
filename = f"{CFG.workspace_path}/{str(uuid.uuid4())}.jpg"
# DALL-E
if CFG.image_provider == "dalle":
return generate_image_with_dalle(prompt, filename, size)
# HuggingFace
elif CFG.image_provider == "huggingface":
return generate_image_with_hf(prompt, filename)
# SD WebUI
elif CFG.image_provider == "sdwebui":
return generate_image_with_sd_webui(prompt, filename, size)
return "No Image Provider Set"
def generate_image_with_hf(prompt: str, filename: str) -> str:
"""Generate an image with HuggingFace's API.
Args:
prompt (str): The prompt to use
filename (str): The filename to save the image to
Returns:
str: The filename of the image
"""
API_URL = (
f"https://api-inference.huggingface.co/models/{CFG.huggingface_image_model}"
)
if CFG.huggingface_api_token is None:
raise ValueError(
"You need to set your Hugging Face API token in the config file."
)
headers = {
"Authorization": f"Bearer {CFG.huggingface_api_token}",
"X-Use-Cache": "false",
}
response = requests.post(
API_URL,
headers=headers,
json={
"inputs": prompt,
},
)
image = Image.open(io.BytesIO(response.content))
logger.info(f"Image Generated for prompt:{prompt}")
image.save(filename)
return f"Saved to disk:{filename}"
def generate_image_with_dalle(prompt: str, filename: str, size: int) -> str:
"""Generate an image with DALL-E.
Args:
prompt (str): The prompt to use
filename (str): The filename to save the image to
size (int): The size of the image
Returns:
str: The filename of the image
"""
# Check for supported image sizes
if size not in [256, 512, 1024]:
closest = min([256, 512, 1024], key=lambda x: abs(x - size))
logger.info(
f"DALL-E only supports image sizes of 256x256, 512x512, or 1024x1024. Setting to {closest}, was {size}."
)
size = closest
response = openai.Image.create(
prompt=prompt,
n=1,
size=f"{size}x{size}",
response_format="b64_json",
api_key=CFG.openai_api_key,
)
logger.info(f"Image Generated for prompt:{prompt}")
image_data = b64decode(response["data"][0]["b64_json"])
with open(filename, mode="wb") as png:
png.write(image_data)
return f"Saved to disk:{filename}"
def generate_image_with_sd_webui(
prompt: str,
filename: str,
size: int = 512,
negative_prompt: str = "",
extra: dict = {},
) -> str:
"""Generate an image with Stable Diffusion webui.
Args:
prompt (str): The prompt to use
filename (str): The filename to save the image to
        size (int, optional): The size of the image. Defaults to 512.
negative_prompt (str, optional): The negative prompt to use. Defaults to "".
extra (dict, optional): Extra parameters to pass to the API. Defaults to {}.
Returns:
str: The filename of the image
"""
# Create a session and set the basic auth if needed
s = requests.Session()
if CFG.sd_webui_auth:
username, password = CFG.sd_webui_auth.split(":")
s.auth = (username, password or "")
# Generate the images
response = requests.post(
f"{CFG.sd_webui_url}/sdapi/v1/txt2img",
json={
"prompt": prompt,
"negative_prompt": negative_prompt,
"sampler_index": "DDIM",
"steps": 20,
"cfg_scale": 7.0,
"width": size,
"height": size,
"n_iter": 1,
**extra,
},
)
logger.info(f"Image Generated for prompt:{prompt}")
# Save the image to disk
response = response.json()
b64 = b64decode(response["images"][0].split(",", 1)[0])
image = Image.open(io.BytesIO(b64))
image.save(filename)
return f"Saved to disk:{filename}"

@@ -0,0 +1,35 @@
from __future__ import annotations
import json
from autogpt.commands.command import command
from autogpt.llm import call_ai_function
@command(
"improve_code",
"Get Improved Code",
'"suggestions": "<list_of_suggestions>", "code": "<full_code_string>"',
)
def improve_code(suggestions: list[str], code: str) -> str:
"""
    A function that takes in code and suggestions and returns a response from a
    chat completion API call.
Parameters:
suggestions (list): A list of suggestions around what needs to be improved.
code (str): Code to be improved.
Returns:
A result string from create chat completion. Improved code in response.
"""
function_string = (
"def generate_improved_code(suggestions: list[str], code: str) -> str:"
)
args = [json.dumps(suggestions), code]
description_string = (
"Improves the provided code based on the suggestions"
" provided, making no other changes."
)
return call_ai_function(function_string, args, description_string)

autogpt/commands/times.py (new file)
@@ -0,0 +1,10 @@
from datetime import datetime
def get_datetime() -> str:
"""Return the current date and time
Returns:
str: The current date and time
"""
return "Current date and time: " + datetime.now().strftime("%Y-%m-%d %H:%M:%S")

@@ -0,0 +1,41 @@
"""A module that contains a command to send a tweet."""
import os
import tweepy
from autogpt.commands.command import command
@command(
"send_tweet",
"Send Tweet",
'"tweet_text": "<tweet_text>"',
)
def send_tweet(tweet_text: str) -> str:
"""
    Send a tweet via the Twitter API, using credentials read from the
    environment.
Args:
tweet_text (str): Text to be tweeted.
Returns:
A result from sending the tweet.
"""
consumer_key = os.environ.get("TW_CONSUMER_KEY")
consumer_secret = os.environ.get("TW_CONSUMER_SECRET")
access_token = os.environ.get("TW_ACCESS_TOKEN")
access_token_secret = os.environ.get("TW_ACCESS_TOKEN_SECRET")
# Authenticate to Twitter
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
# Create API object
api = tweepy.API(auth)
# Send tweet
try:
api.update_status(tweet_text)
return "Tweet sent successfully!"
except tweepy.TweepyException as e:
return f"Error sending tweet: {e.reason}"

@@ -0,0 +1,82 @@
"""Web scraping commands using Playwright"""
from __future__ import annotations
from autogpt.logs import logger
try:
from playwright.sync_api import sync_playwright
except ImportError:
logger.info(
"Playwright not installed. Please install it with 'pip install playwright' to use."
)
from bs4 import BeautifulSoup
from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
def scrape_text(url: str) -> str:
"""Scrape text from a webpage
Args:
url (str): The URL to scrape text from
Returns:
str: The scraped text
"""
with sync_playwright() as p:
browser = p.chromium.launch()
page = browser.new_page()
try:
page.goto(url)
html_content = page.content()
soup = BeautifulSoup(html_content, "html.parser")
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
text = "\n".join(chunk for chunk in chunks if chunk)
except Exception as e:
text = f"Error: {str(e)}"
finally:
browser.close()
return text
def scrape_links(url: str) -> str | list[str]:
"""Scrape links from a webpage
Args:
url (str): The URL to scrape links from
Returns:
Union[str, List[str]]: The scraped links
"""
with sync_playwright() as p:
browser = p.chromium.launch()
page = browser.new_page()
try:
page.goto(url)
html_content = page.content()
soup = BeautifulSoup(html_content, "html.parser")
for script in soup(["script", "style"]):
script.extract()
hyperlinks = extract_hyperlinks(soup, url)
formatted_links = format_hyperlinks(hyperlinks)
except Exception as e:
formatted_links = f"Error: {str(e)}"
finally:
browser.close()
return formatted_links

@@ -0,0 +1,112 @@
"""Browse a webpage and summarize it using the LLM model"""
from __future__ import annotations
import requests
from bs4 import BeautifulSoup
from requests import Response
from autogpt.config import Config
from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
from autogpt.url_utils.validators import validate_url
CFG = Config()
session = requests.Session()
session.headers.update({"User-Agent": CFG.user_agent})
@validate_url
def get_response(
url: str, timeout: int = 10
) -> tuple[None, str] | tuple[Response, None]:
"""Get the response from a URL
Args:
url (str): The URL to get the response from
timeout (int): The timeout for the HTTP request
Returns:
tuple[None, str] | tuple[Response, None]: The response and error message
Raises:
ValueError: If the URL is invalid
requests.exceptions.RequestException: If the HTTP request fails
"""
try:
response = session.get(url, timeout=timeout)
# Check if the response contains an HTTP error
if response.status_code >= 400:
return None, f"Error: HTTP {str(response.status_code)} error"
return response, None
except ValueError as ve:
# Handle invalid URL format
return None, f"Error: {str(ve)}"
except requests.exceptions.RequestException as re:
# Handle exceptions related to the HTTP request
# (e.g., connection errors, timeouts, etc.)
return None, f"Error: {str(re)}"
def scrape_text(url: str) -> str:
"""Scrape text from a webpage
Args:
url (str): The URL to scrape text from
Returns:
str: The scraped text
"""
response, error_message = get_response(url)
if error_message:
return error_message
if not response:
return "Error: Could not get response"
soup = BeautifulSoup(response.text, "html.parser")
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
text = "\n".join(chunk for chunk in chunks if chunk)
return text
def scrape_links(url: str) -> str | list[str]:
"""Scrape links from a webpage
Args:
url (str): The URL to scrape links from
Returns:
str | list[str]: The scraped links
"""
response, error_message = get_response(url)
if error_message:
return error_message
if not response:
return "Error: Could not get response"
soup = BeautifulSoup(response.text, "html.parser")
for script in soup(["script", "style"]):
script.extract()
hyperlinks = extract_hyperlinks(soup, url)
return format_hyperlinks(hyperlinks)
def create_message(chunk, question):
"""Create a message for the user to summarize a chunk of text"""
return {
"role": "user",
"content": f'"""{chunk}""" Using the above text, answer the following'
f' question: "{question}" -- if the question cannot be answered using the'
" text, summarize the text.",
}
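For illustration, the message each text chunk is wrapped into before being sent to the model:

msg = create_message("Auto-GPT is an experimental agent...", "What is Auto-GPT?")
assert msg["role"] == "user"
print(msg["content"])  # the chunk, followed by the question (with a summarise fallback)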

@@ -0,0 +1,178 @@
"""Selenium web scraping module."""
from __future__ import annotations
import logging
from pathlib import Path
from sys import platform
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import WebDriverException
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.common.by import By
from selenium.webdriver.firefox.options import Options as FirefoxOptions
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.safari.options import Options as SafariOptions
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager
from webdriver_manager.firefox import GeckoDriverManager
import autogpt.processing.text as summary
from autogpt.commands.command import command
from autogpt.config import Config
from autogpt.processing.html import extract_hyperlinks, format_hyperlinks
from autogpt.url_utils.validators import validate_url
FILE_DIR = Path(__file__).parent.parent
CFG = Config()
@command(
"browse_website",
"Browse Website",
'"url": "<url>", "question": "<what_you_want_to_find_on_website>"',
)
@validate_url
def browse_website(url: str, question: str) -> tuple[str, WebDriver]:
"""Browse a website and return the answer and links to the user
Args:
url (str): The url of the website to browse
question (str): The question asked by the user
Returns:
Tuple[str, WebDriver]: The answer and links to the user and the webdriver
"""
try:
driver, text = scrape_text_with_selenium(url)
except WebDriverException as e:
# These errors are often quite long and include lots of context.
# Just grab the first line.
msg = e.msg.split("\n")[0]
return f"Error: {msg}", None
add_header(driver)
summary_text = summary.summarize_text(url, text, question, driver)
links = scrape_links_with_selenium(driver, url)
# Limit links to 5
if len(links) > 5:
links = links[:5]
close_browser(driver)
return f"Answer gathered from website: {summary_text} \n \n Links: {links}", driver
def scrape_text_with_selenium(url: str) -> tuple[WebDriver, str]:
"""Scrape text from a website using selenium
Args:
url (str): The url of the website to scrape
Returns:
Tuple[WebDriver, str]: The webdriver and the text scraped from the website
"""
logging.getLogger("selenium").setLevel(logging.CRITICAL)
options_available = {
"chrome": ChromeOptions,
"safari": SafariOptions,
"firefox": FirefoxOptions,
}
options = options_available[CFG.selenium_web_browser]()
options.add_argument(
"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.5615.49 Safari/537.36"
)
if CFG.selenium_web_browser == "firefox":
if CFG.selenium_headless:
options.headless = True
options.add_argument("--disable-gpu")
driver = webdriver.Firefox(
executable_path=GeckoDriverManager().install(), options=options
)
elif CFG.selenium_web_browser == "safari":
        # Requires a bit more setup on the user's end
# See https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari
driver = webdriver.Safari(options=options)
else:
if platform == "linux" or platform == "linux2":
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--remote-debugging-port=9222")
options.add_argument("--no-sandbox")
if CFG.selenium_headless:
options.add_argument("--headless=new")
options.add_argument("--disable-gpu")
chromium_driver_path = Path("/usr/bin/chromedriver")
driver = webdriver.Chrome(
executable_path=chromium_driver_path
if chromium_driver_path.exists()
else ChromeDriverManager().install(),
options=options,
)
driver.get(url)
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.TAG_NAME, "body"))
)
# Get the HTML content directly from the browser's DOM
page_source = driver.execute_script("return document.body.outerHTML;")
soup = BeautifulSoup(page_source, "html.parser")
for script in soup(["script", "style"]):
script.extract()
text = soup.get_text()
lines = (line.strip() for line in text.splitlines())
chunks = (phrase.strip() for line in lines for phrase in line.split(" "))
text = "\n".join(chunk for chunk in chunks if chunk)
return driver, text
def scrape_links_with_selenium(driver: WebDriver, url: str) -> list[str]:
"""Scrape links from a website using selenium
Args:
driver (WebDriver): The webdriver to use to scrape the links
Returns:
List[str]: The links scraped from the website
"""
page_source = driver.page_source
soup = BeautifulSoup(page_source, "html.parser")
for script in soup(["script", "style"]):
script.extract()
hyperlinks = extract_hyperlinks(soup, url)
return format_hyperlinks(hyperlinks)
def close_browser(driver: WebDriver) -> None:
"""Close the browser
Args:
driver (WebDriver): The webdriver to close
Returns:
None
"""
driver.quit()
def add_header(driver: WebDriver) -> None:
"""Add a header to the website
Args:
driver (WebDriver): The webdriver to use to add the header
Returns:
None
"""
driver.execute_script(open(f"{FILE_DIR}/js/overlay.js", "r").read())

@@ -0,0 +1,37 @@
"""A module that contains a function to generate test cases for the submitted code."""
from __future__ import annotations
import json
from autogpt.commands.command import command
from autogpt.llm import call_ai_function
@command(
"write_tests",
"Write Tests",
'"code": "<full_code_string>", "focus": "<list_of_focus_areas>"',
)
def write_tests(code: str, focus: list[str]) -> str:
"""
    A function that takes in code and focus topics and returns a response from a
    chat completion API call.
Parameters:
        focus (list): A list of focus areas for the generated test cases.
code (str): Code for test cases to be generated against.
Returns:
A result string from create chat completion. Test cases for the submitted code
in response.
"""
function_string = (
"def create_test_cases(code: str, focus: Optional[str] = None) -> str:"
)
args = [code, json.dumps(focus)]
description_string = (
"Generates test cases for the existing code, focusing on"
" specific areas if required."
)
return call_ai_function(function_string, args, description_string)

@@ -0,0 +1,11 @@
"""
This module contains the configuration classes for AutoGPT.
"""
from autogpt.config.ai_config import AIConfig
from autogpt.config.config import Config, check_openai_api_key
__all__ = [
"check_openai_api_key",
"AIConfig",
"Config",
]

autogpt/config/ai_config.py (new file)
@@ -0,0 +1,168 @@
# sourcery skip: do-not-use-staticmethod
"""
A module that contains the AIConfig class object that contains the configuration
"""
from __future__ import annotations
import os
import platform
from pathlib import Path
from typing import Any, Optional, Type
import distro
import yaml
from autogpt.prompts.generator import PromptGenerator
# Soon this will go in a folder where it remembers more stuff about the run(s)
SAVE_FILE = str(Path(os.getcwd()) / "ai_settings.yaml")
class AIConfig:
"""
A class object that contains the configuration information for the AI
Attributes:
ai_name (str): The name of the AI.
ai_role (str): The description of the AI's role.
ai_goals (list): The list of objectives the AI is supposed to complete.
api_budget (float): The maximum dollar value for API calls (0.0 means infinite)
"""
def __init__(
self,
ai_name: str = "",
ai_role: str = "",
ai_goals: list | None = None,
api_budget: float = 0.0,
) -> None:
"""
Initialize a class instance
Parameters:
ai_name (str): The name of the AI.
ai_role (str): The description of the AI's role.
ai_goals (list): The list of objectives the AI is supposed to complete.
api_budget (float): The maximum dollar value for API calls (0.0 means infinite)
Returns:
None
"""
if ai_goals is None:
ai_goals = []
self.ai_name = ai_name
self.ai_role = ai_role
self.ai_goals = ai_goals
self.api_budget = api_budget
self.prompt_generator = None
self.command_registry = None
@staticmethod
def load(config_file: str = SAVE_FILE) -> "AIConfig":
"""
Returns class object with parameters (ai_name, ai_role, ai_goals, api_budget) loaded from
yaml file if yaml file exists,
else returns class with no parameters.
Parameters:
            config_file (str): The path to the config yaml file.
                DEFAULT: "../ai_settings.yaml"
        Returns:
            AIConfig: An instance populated from the yaml file, or with default
            values if the file does not exist.
"""
try:
with open(config_file, encoding="utf-8") as file:
config_params = yaml.load(file, Loader=yaml.FullLoader)
except FileNotFoundError:
config_params = {}
ai_name = config_params.get("ai_name", "")
ai_role = config_params.get("ai_role", "")
ai_goals = [
str(goal).strip("{}").replace("'", "").replace('"', "")
if isinstance(goal, dict)
else str(goal)
for goal in config_params.get("ai_goals", [])
]
api_budget = config_params.get("api_budget", 0.0)
# type: Type[AIConfig]
return AIConfig(ai_name, ai_role, ai_goals, api_budget)
def save(self, config_file: str = SAVE_FILE) -> None:
"""
Saves the class parameters to the specified file yaml file path as a yaml file.
Parameters:
config_file(str): The path to the config yaml file.
DEFAULT: "../ai_settings.yaml"
Returns:
None
"""
config = {
"ai_name": self.ai_name,
"ai_role": self.ai_role,
"ai_goals": self.ai_goals,
"api_budget": self.api_budget,
}
with open(config_file, "w", encoding="utf-8") as file:
yaml.dump(config, file, allow_unicode=True)
def construct_full_prompt(
self, prompt_generator: Optional[PromptGenerator] = None
) -> str:
"""
Returns a prompt to the user with the class information in an organized fashion.
Parameters:
None
Returns:
full_prompt (str): A string containing the initial prompt for the user
including the ai_name, ai_role, ai_goals, and api_budget.
"""
prompt_start = (
"Your decisions must always be made independently without"
" seeking user assistance. Play to your strengths as an LLM and pursue"
" simple strategies with no legal complications."
""
)
from autogpt.config import Config
from autogpt.prompts.prompt import build_default_prompt_generator
cfg = Config()
if prompt_generator is None:
prompt_generator = build_default_prompt_generator()
prompt_generator.goals = self.ai_goals
prompt_generator.name = self.ai_name
prompt_generator.role = self.ai_role
prompt_generator.command_registry = self.command_registry
for plugin in cfg.plugins:
if not plugin.can_handle_post_prompt():
continue
prompt_generator = plugin.post_prompt(prompt_generator)
if cfg.execute_local_commands:
# add OS info to prompt
os_name = platform.system()
os_info = (
platform.platform(terse=True)
if os_name != "Linux"
else distro.name(pretty=True)
)
prompt_start += f"\nThe OS you are running on is: {os_info}"
# Construct full prompt
full_prompt = f"You are {prompt_generator.name}, {prompt_generator.role}\n{prompt_start}\n\nGOALS:\n\n"
for i, goal in enumerate(self.ai_goals):
full_prompt += f"{i+1}. {goal}\n"
if self.api_budget > 0.0:
full_prompt += f"\nIt takes money to let you run. Your API budget is ${self.api_budget:.3f}"
self.prompt_generator = prompt_generator
full_prompt += f"\n\n{prompt_generator.generate_prompt_string()}"
return full_prompt
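Round-tripping an AIConfig through ai_settings.yaml, as a sketch (the name, role, goals, and budget are illustrative):

config = AIConfig(
    ai_name="ResearchGPT",
    ai_role="an AI that compiles literature reviews",
    ai_goals=["Find three recent papers", "Summarise each in 100 words"],
    api_budget=1.5,
)
config.save("ai_settings.yaml")
loaded = AIConfig.load("ai_settings.yaml")
assert loaded.ai_goals == config.ai_goals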

autogpt/config/config.py (new file)
@@ -0,0 +1,282 @@
"""Configuration class to store the state of bools for different scripts access."""
import os
from typing import List
import openai
import yaml
from auto_gpt_plugin_template import AutoGPTPluginTemplate
from colorama import Fore
from autogpt.singleton import Singleton
class Config(metaclass=Singleton):
"""
Configuration class to store the state of bools for different scripts access.
"""
def __init__(self) -> None:
"""Initialize the Config class"""
self.workspace_path = None
self.file_logger_path = None
self.debug_mode = False
self.continuous_mode = False
self.continuous_limit = 0
self.speak_mode = False
self.skip_reprompt = False
self.allow_downloads = False
self.skip_news = False
self.authorise_key = os.getenv("AUTHORISE_COMMAND_KEY", "y")
self.exit_key = os.getenv("EXIT_KEY", "n")
self.ai_settings_file = os.getenv("AI_SETTINGS_FILE", "ai_settings.yaml")
self.fast_llm_model = os.getenv("FAST_LLM_MODEL", "gpt-3.5-turbo")
self.smart_llm_model = os.getenv("SMART_LLM_MODEL", "gpt-4")
self.fast_token_limit = int(os.getenv("FAST_TOKEN_LIMIT", 4000))
self.smart_token_limit = int(os.getenv("SMART_TOKEN_LIMIT", 8000))
self.browse_chunk_max_length = int(os.getenv("BROWSE_CHUNK_MAX_LENGTH", 3000))
self.browse_spacy_language_model = os.getenv(
"BROWSE_SPACY_LANGUAGE_MODEL", "en_core_web_sm"
)
self.openai_api_key = os.getenv("OPENAI_API_KEY")
self.temperature = float(os.getenv("TEMPERATURE", "0"))
self.use_azure = os.getenv("USE_AZURE") == "True"
self.execute_local_commands = (
os.getenv("EXECUTE_LOCAL_COMMANDS", "False") == "True"
)
self.restrict_to_workspace = (
os.getenv("RESTRICT_TO_WORKSPACE", "True") == "True"
)
if self.use_azure:
self.load_azure_config()
openai.api_type = self.openai_api_type
openai.api_base = self.openai_api_base
openai.api_version = self.openai_api_version
self.elevenlabs_api_key = os.getenv("ELEVENLABS_API_KEY")
self.elevenlabs_voice_1_id = os.getenv("ELEVENLABS_VOICE_1_ID")
self.elevenlabs_voice_2_id = os.getenv("ELEVENLABS_VOICE_2_ID")
self.use_mac_os_tts = False
self.use_mac_os_tts = os.getenv("USE_MAC_OS_TTS")
self.chat_messages_enabled = os.getenv("CHAT_MESSAGES_ENABLED") == "True"
self.use_brian_tts = False
self.use_brian_tts = os.getenv("USE_BRIAN_TTS")
self.github_api_key = os.getenv("GITHUB_API_KEY")
self.github_username = os.getenv("GITHUB_USERNAME")
self.google_api_key = os.getenv("GOOGLE_API_KEY")
self.custom_search_engine_id = os.getenv("CUSTOM_SEARCH_ENGINE_ID")
self.pinecone_api_key = os.getenv("PINECONE_API_KEY")
self.pinecone_region = os.getenv("PINECONE_ENV")
self.weaviate_host = os.getenv("WEAVIATE_HOST")
self.weaviate_port = os.getenv("WEAVIATE_PORT")
self.weaviate_protocol = os.getenv("WEAVIATE_PROTOCOL", "http")
self.weaviate_username = os.getenv("WEAVIATE_USERNAME", None)
self.weaviate_password = os.getenv("WEAVIATE_PASSWORD", None)
self.weaviate_scopes = os.getenv("WEAVIATE_SCOPES", None)
self.weaviate_embedded_path = os.getenv("WEAVIATE_EMBEDDED_PATH")
self.weaviate_api_key = os.getenv("WEAVIATE_API_KEY", None)
self.use_weaviate_embedded = (
os.getenv("USE_WEAVIATE_EMBEDDED", "False") == "True"
)
# milvus or zilliz cloud configuration.
self.milvus_addr = os.getenv("MILVUS_ADDR", "localhost:19530")
self.milvus_username = os.getenv("MILVUS_USERNAME")
self.milvus_password = os.getenv("MILVUS_PASSWORD")
self.milvus_collection = os.getenv("MILVUS_COLLECTION", "autogpt")
self.milvus_secure = os.getenv("MILVUS_SECURE") == "True"
self.image_provider = os.getenv("IMAGE_PROVIDER")
self.image_size = int(os.getenv("IMAGE_SIZE", 256))
self.huggingface_api_token = os.getenv("HUGGINGFACE_API_TOKEN")
self.huggingface_image_model = os.getenv(
"HUGGINGFACE_IMAGE_MODEL", "CompVis/stable-diffusion-v1-4"
)
self.huggingface_audio_to_text_model = os.getenv(
"HUGGINGFACE_AUDIO_TO_TEXT_MODEL"
)
self.sd_webui_url = os.getenv("SD_WEBUI_URL", "http://localhost:7860")
self.sd_webui_auth = os.getenv("SD_WEBUI_AUTH")
# Selenium browser settings
self.selenium_web_browser = os.getenv("USE_WEB_BROWSER", "chrome")
self.selenium_headless = os.getenv("HEADLESS_BROWSER", "True") == "True"
# User agent header to use when making HTTP requests
# Some websites might just completely deny request with an error code if
# no user agent was found.
self.user_agent = os.getenv(
"USER_AGENT",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36"
" (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36",
)
self.redis_host = os.getenv("REDIS_HOST", "localhost")
self.redis_port = os.getenv("REDIS_PORT", "6379")
self.redis_password = os.getenv("REDIS_PASSWORD", "")
self.wipe_redis_on_start = os.getenv("WIPE_REDIS_ON_START", "True") == "True"
self.memory_index = os.getenv("MEMORY_INDEX", "auto-gpt")
# Note that indexes must be created on db 0 in redis, this is not configurable.
self.memory_backend = os.getenv("MEMORY_BACKEND", "local")
self.plugins_dir = os.getenv("PLUGINS_DIR", "plugins")
self.plugins: List[AutoGPTPluginTemplate] = []
self.plugins_openai = []
plugins_allowlist = os.getenv("ALLOWLISTED_PLUGINS")
if plugins_allowlist:
self.plugins_allowlist = plugins_allowlist.split(",")
else:
self.plugins_allowlist = []
self.plugins_denylist = []
def get_azure_deployment_id_for_model(self, model: str) -> str:
"""
Returns the relevant deployment id for the model specified.
Parameters:
model(str): The model to map to the deployment id.
Returns:
The matching deployment id if found, otherwise an empty string.
"""
if model == self.fast_llm_model:
return self.azure_model_to_deployment_id_map[
"fast_llm_model_deployment_id"
] # type: ignore
elif model == self.smart_llm_model:
return self.azure_model_to_deployment_id_map[
"smart_llm_model_deployment_id"
] # type: ignore
elif model == "text-embedding-ada-002":
return self.azure_model_to_deployment_id_map[
"embedding_model_deployment_id"
] # type: ignore
else:
return ""
AZURE_CONFIG_FILE = os.path.join(os.path.dirname(__file__), "../..", "azure.yaml")
def load_azure_config(self, config_file: str = AZURE_CONFIG_FILE) -> None:
"""
Loads the configuration parameters for Azure hosting from the specified file
path as a yaml file.
Parameters:
config_file(str): The path to the config yaml file. DEFAULT: "../../azure.yaml"
Returns:
None
"""
with open(config_file) as file:
config_params = yaml.load(file, Loader=yaml.FullLoader)
self.openai_api_type = config_params.get("azure_api_type") or "azure"
self.openai_api_base = config_params.get("azure_api_base") or ""
self.openai_api_version = (
config_params.get("azure_api_version") or "2023-03-15-preview"
)
self.azure_model_to_deployment_id_map = config_params.get("azure_model_map", {})
def set_continuous_mode(self, value: bool) -> None:
"""Set the continuous mode value."""
self.continuous_mode = value
def set_continuous_limit(self, value: int) -> None:
"""Set the continuous limit value."""
self.continuous_limit = value
def set_speak_mode(self, value: bool) -> None:
"""Set the speak mode value."""
self.speak_mode = value
def set_fast_llm_model(self, value: str) -> None:
"""Set the fast LLM model value."""
self.fast_llm_model = value
def set_smart_llm_model(self, value: str) -> None:
"""Set the smart LLM model value."""
self.smart_llm_model = value
def set_fast_token_limit(self, value: int) -> None:
"""Set the fast token limit value."""
self.fast_token_limit = value
def set_smart_token_limit(self, value: int) -> None:
"""Set the smart token limit value."""
self.smart_token_limit = value
def set_browse_chunk_max_length(self, value: int) -> None:
"""Set the browse_website command chunk max length value."""
self.browse_chunk_max_length = value
def set_openai_api_key(self, value: str) -> None:
"""Set the OpenAI API key value."""
self.openai_api_key = value
def set_elevenlabs_api_key(self, value: str) -> None:
"""Set the ElevenLabs API key value."""
self.elevenlabs_api_key = value
def set_elevenlabs_voice_1_id(self, value: str) -> None:
"""Set the ElevenLabs Voice 1 ID value."""
self.elevenlabs_voice_1_id = value
def set_elevenlabs_voice_2_id(self, value: str) -> None:
"""Set the ElevenLabs Voice 2 ID value."""
self.elevenlabs_voice_2_id = value
def set_google_api_key(self, value: str) -> None:
"""Set the Google API key value."""
self.google_api_key = value
def set_custom_search_engine_id(self, value: str) -> None:
"""Set the custom search engine id value."""
self.custom_search_engine_id = value
def set_pinecone_api_key(self, value: str) -> None:
"""Set the Pinecone API key value."""
self.pinecone_api_key = value
def set_pinecone_region(self, value: str) -> None:
"""Set the Pinecone region value."""
self.pinecone_region = value
def set_debug_mode(self, value: bool) -> None:
"""Set the debug mode value."""
self.debug_mode = value
def set_plugins(self, value: list) -> None:
"""Set the plugins value."""
self.plugins = value
def set_temperature(self, value: int) -> None:
"""Set the temperature value."""
self.temperature = value
def set_memory_backend(self, name: str) -> None:
"""Set the memory backend name."""
self.memory_backend = name
def check_openai_api_key() -> None:
"""Check if the OpenAI API key is set in config.py or as an environment variable."""
cfg = Config()
if not cfg.openai_api_key:
print(
Fore.RED
+ "Please set your OpenAI API key in .env or as an environment variable."
+ Fore.RESET
)
print("You can get your key from https://platform.openai.com/account/api-keys")
exit(1)
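
Because Config uses the Singleton metaclass, every call site shares one instance, so a setter called anywhere is visible everywhere. A minimal usage sketch (illustrative, not part of this diff):

from autogpt.config import Config

cfg_a = Config()
cfg_b = Config()
assert cfg_a is cfg_b  # the Singleton metaclass returns the same instance
cfg_a.set_debug_mode(True)
print(cfg_b.debug_mode)  # True, since the state is shared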

autogpt/configurator.py Normal file
@@ -0,0 +1,134 @@
"""Configurator module."""
import click
from colorama import Back, Fore, Style
from autogpt import utils
from autogpt.config import Config
from autogpt.logs import logger
from autogpt.memory import get_supported_memory_backends
CFG = Config()
def create_config(
continuous: bool,
continuous_limit: int,
ai_settings_file: str,
skip_reprompt: bool,
speak: bool,
debug: bool,
gpt3only: bool,
gpt4only: bool,
memory_type: str,
browser_name: str,
allow_downloads: bool,
skip_news: bool,
) -> None:
"""Updates the config object with the given arguments.
Args:
continuous (bool): Whether to run in continuous mode
continuous_limit (int): The number of times to run in continuous mode
ai_settings_file (str): The path to the ai_settings.yaml file
skip_reprompt (bool): Whether to skip the re-prompting messages at the beginning of the script
speak (bool): Whether to enable speak mode
debug (bool): Whether to enable debug mode
gpt3only (bool): Whether to enable GPT3.5 only mode
gpt4only (bool): Whether to enable GPT4 only mode
memory_type (str): The type of memory backend to use
browser_name (str): The name of the browser to use when using selenium to scrape the web
allow_downloads (bool): Whether to allow Auto-GPT to download files natively
skip_news (bool): Whether to suppress the output of latest news on startup
"""
CFG.set_debug_mode(False)
CFG.set_continuous_mode(False)
CFG.set_speak_mode(False)
if debug:
logger.typewriter_log("Debug Mode: ", Fore.GREEN, "ENABLED")
CFG.set_debug_mode(True)
if continuous:
logger.typewriter_log("Continuous Mode: ", Fore.RED, "ENABLED")
logger.typewriter_log(
"WARNING: ",
Fore.RED,
"Continuous mode is not recommended. It is potentially dangerous and may"
" cause your AI to run forever or carry out actions you would not usually"
" authorise. Use at your own risk.",
)
CFG.set_continuous_mode(True)
if continuous_limit:
logger.typewriter_log(
"Continuous Limit: ", Fore.GREEN, f"{continuous_limit}"
)
CFG.set_continuous_limit(continuous_limit)
# Check if continuous limit is used without continuous mode
if continuous_limit and not continuous:
raise click.UsageError("--continuous-limit can only be used with --continuous")
if speak:
logger.typewriter_log("Speak Mode: ", Fore.GREEN, "ENABLED")
CFG.set_speak_mode(True)
if gpt3only:
logger.typewriter_log("GPT3.5 Only Mode: ", Fore.GREEN, "ENABLED")
CFG.set_smart_llm_model(CFG.fast_llm_model)
if gpt4only:
logger.typewriter_log("GPT4 Only Mode: ", Fore.GREEN, "ENABLED")
CFG.set_fast_llm_model(CFG.smart_llm_model)
if memory_type:
supported_memory = get_supported_memory_backends()
chosen = memory_type
if chosen not in supported_memory:
logger.typewriter_log(
"ONLY THE FOLLOWING MEMORY BACKENDS ARE SUPPORTED: ",
Fore.RED,
f"{supported_memory}",
)
logger.typewriter_log("Defaulting to: ", Fore.YELLOW, CFG.memory_backend)
else:
CFG.memory_backend = chosen
if skip_reprompt:
logger.typewriter_log("Skip Re-prompt: ", Fore.GREEN, "ENABLED")
CFG.skip_reprompt = True
if ai_settings_file:
file = ai_settings_file
# Validate file
(validated, message) = utils.validate_yaml_file(file)
if not validated:
logger.typewriter_log("FAILED FILE VALIDATION", Fore.RED, message)
logger.double_check()
exit(1)
logger.typewriter_log("Using AI Settings File:", Fore.GREEN, file)
CFG.ai_settings_file = file
CFG.skip_reprompt = True
if browser_name:
CFG.selenium_web_browser = browser_name
if allow_downloads:
logger.typewriter_log("Native Downloading:", Fore.GREEN, "ENABLED")
logger.typewriter_log(
"WARNING: ",
Fore.YELLOW,
f"{Back.LIGHTYELLOW_EX}Auto-GPT will now be able to download and save files to your machine.{Back.RESET} "
+ "It is recommended that you monitor any files it downloads carefully.",
)
logger.typewriter_log(
"WARNING: ",
Fore.YELLOW,
f"{Back.RED + Style.BRIGHT}ALWAYS REMEMBER TO NEVER OPEN FILES YOU AREN'T SURE OF!{Style.RESET_ALL}",
)
CFG.allow_downloads = True
if skip_news:
CFG.skip_news = True
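
create_config only mutates the shared Config singleton; a front end is expected to collect the flags and forward them. A hedged sketch of such a caller (the real CLI lives elsewhere in the repo; the reduced flag set here is an assumption):

import click

from autogpt.configurator import create_config

@click.command()
@click.option("--continuous", is_flag=True)
@click.option("--continuous-limit", type=int, default=0)
@click.option("--debug", is_flag=True)
def main(continuous: bool, continuous_limit: int, debug: bool) -> None:
    # Remaining parameters are passed as falsy defaults in this sketch.
    create_config(
        continuous, continuous_limit, "", False, False, debug,
        False, False, "", "", False, False,
    )

if __name__ == "__main__":
    main()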

autogpt/js/overlay.js Normal file
@@ -0,0 +1,29 @@
const overlay = document.createElement('div');
Object.assign(overlay.style, {
position: 'fixed',
zIndex: 999999,
top: 0,
left: 0,
width: '100%',
height: '100%',
background: 'rgba(0, 0, 0, 0.7)',
color: '#fff',
fontSize: '24px',
fontWeight: 'bold',
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
});
const textContent = document.createElement('div');
Object.assign(textContent.style, {
textAlign: 'center',
});
textContent.textContent = 'AutoGPT Analyzing Page';
overlay.appendChild(textContent);
document.body.append(overlay);
document.body.style.overflow = 'hidden';
let dotCount = 0;
setInterval(() => {
textContent.textContent = 'AutoGPT Analyzing Page' + '.'.repeat(dotCount);
dotCount = (dotCount + 1) % 4;
}, 1000);
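
The overlay is plain DOM scripting, so it can be injected into any page a Selenium driver controls. A hypothetical injection sketch in Python (the actual call site in the browsing command is not part of this diff):

from pathlib import Path

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")
# Dim the page and show the animated "AutoGPT Analyzing Page" banner.
driver.execute_script(Path("autogpt/js/overlay.js").read_text())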

autogpt/json_utils/json_fix_general.py Normal file
@@ -0,0 +1,121 @@
"""This module contains functions to fix JSON strings using general programmatic approaches, suitable for addressing
common JSON formatting issues."""
from __future__ import annotations
import contextlib
import json
import re
from typing import Optional
from autogpt.config import Config
from autogpt.json_utils.utilities import extract_char_position
from autogpt.logs import logger
CFG = Config()
def fix_invalid_escape(json_to_load: str, error_message: str) -> str:
"""Fix invalid escape sequences in JSON strings.
Args:
json_to_load (str): The JSON string.
error_message (str): The error message from the JSONDecodeError
exception.
Returns:
str: The JSON string with invalid escape sequences fixed.
"""
while error_message.startswith("Invalid \\escape"):
bad_escape_location = extract_char_position(error_message)
json_to_load = (
json_to_load[:bad_escape_location] + json_to_load[bad_escape_location + 1 :]
)
try:
json.loads(json_to_load)
return json_to_load
except json.JSONDecodeError as e:
logger.debug("json loads error - fix invalid escape", e)
error_message = str(e)
return json_to_load
def balance_braces(json_string: str) -> Optional[str]:
"""
Balance the braces in a JSON string.
Args:
json_string (str): The JSON string.
Returns:
Optional[str]: The balanced JSON string if it parses, otherwise None.
"""
open_braces_count = json_string.count("{")
close_braces_count = json_string.count("}")
while open_braces_count > close_braces_count:
json_string += "}"
close_braces_count += 1
while close_braces_count > open_braces_count:
if json_string.endswith("}"):
    json_string = json_string[:-1]  # remove one surplus closing brace per pass
close_braces_count -= 1
with contextlib.suppress(json.JSONDecodeError):
json.loads(json_string)
return json_string
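# A sketch of the intended behaviour (illustrative; assumes surplus braces
# are removed one at a time, per the loop above):
# >>> balance_braces('{"a": {"b": 1}')
# '{"a": {"b": 1}}'
# >>> balance_braces('{"a": 1}}')
# '{"a": 1}'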
def add_quotes_to_property_names(json_string: str) -> str:
"""
Add quotes to property names in a JSON string.
Args:
json_string (str): The JSON string.
Returns:
str: The JSON string with quotes added to property names.
"""
def replace_func(match: re.Match) -> str:
return f'"{match[1]}":'
property_name_pattern = re.compile(r"(\w+):")
corrected_json_string = property_name_pattern.sub(replace_func, json_string)
try:
json.loads(corrected_json_string)
return corrected_json_string
except json.JSONDecodeError:
raise
def correct_json(json_to_load: str) -> str:
"""
Correct common JSON errors.
Args:
json_to_load (str): The JSON string.
"""
try:
logger.debug("json", json_to_load)
json.loads(json_to_load)
return json_to_load
except json.JSONDecodeError as e:
logger.debug("json loads error", e)
error_message = str(e)
if error_message.startswith("Invalid \\escape"):
json_to_load = fix_invalid_escape(json_to_load, error_message)
if error_message.startswith(
"Expecting property name enclosed in double quotes"
):
json_to_load = add_quotes_to_property_names(json_to_load)
try:
json.loads(json_to_load)
return json_to_load
except json.JSONDecodeError as e:
logger.debug("json loads error - add quotes", e)
error_message = str(e)
if balanced_str := balance_braces(json_to_load):
return balanced_str
return json_to_load
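# Illustrative repair of unquoted property names (a sketch, not a test from
# this diff; the module instantiates Config() at import time, so a loadable
# .env is assumed):
# >>> correct_json('{name: "browse_website", args: {url: "example.com"}}')
# '{"name": "browse_website", "args": {"url": "example.com"}}'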

autogpt/json_utils/json_fix_llm.py Normal file
@@ -0,0 +1,239 @@
"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance
of the ChatGPT API or LLM models."""
from __future__ import annotations
import contextlib
import json
from typing import Any, Dict
from colorama import Fore
from regex import regex
from autogpt.config import Config
from autogpt.json_utils.json_fix_general import correct_json
from autogpt.llm import call_ai_function
from autogpt.logs import logger
from autogpt.speech import say_text
JSON_SCHEMA = """
{
"command": {
"name": "command name",
"args": {
"arg name": "value"
}
},
"thoughts":
{
"text": "thought",
"reasoning": "reasoning",
"plan": "- short bulleted\n- list that conveys\n- long-term plan",
"criticism": "constructive self-criticism",
"speak": "thoughts summary to say to user"
}
}
"""
CFG = Config()
def auto_fix_json(json_string: str, schema: str) -> str:
"""Fix the given JSON string to make it parseable and fully compliant with
the provided schema using GPT-3.
Args:
json_string (str): The JSON string to fix.
schema (str): The schema to use to fix the JSON.
Returns:
str: The fixed JSON string.
"""
# Try to fix the JSON using GPT:
function_string = "def fix_json(json_string: str, schema:str=None) -> str:"
args = [f"'''{json_string}'''", f"'''{schema}'''"]
description_string = (
"This function takes a JSON string and ensures that it"
" is parseable and fully compliant with the provided schema. If an object"
" or field specified in the schema isn't contained within the correct JSON,"
" it is omitted. The function also escapes any double quotes within JSON"
" string values to ensure that they are valid. If the JSON string contains"
" any None or NaN values, they are replaced with null before being parsed."
)
# If it doesn't already start with a "`", add one:
if not json_string.startswith("`"):
json_string = "```json\n" + json_string + "\n```"
result_string = call_ai_function(
function_string, args, description_string, model=CFG.fast_llm_model
)
logger.debug("------------ JSON FIX ATTEMPT ---------------")
logger.debug(f"Original JSON: {json_string}")
logger.debug("-----------")
logger.debug(f"Fixed JSON: {result_string}")
logger.debug("----------- END OF FIX ATTEMPT ----------------")
try:
json.loads(result_string) # just check the validity
return result_string
except json.JSONDecodeError: # noqa: E722
# Get the call stack:
# import traceback
# call_stack = traceback.format_exc()
# print(f"Failed to fix JSON: '{json_string}' "+call_stack)
return "failed"
def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]:
"""Fix the given JSON string to make it parseable and fully compliant with two techniques.
Args:
json_string (str): The JSON string to fix.
Returns:
str: The fixed JSON string.
"""
assistant_reply = assistant_reply.strip()
if assistant_reply.startswith("```json"):
assistant_reply = assistant_reply[7:]
if assistant_reply.endswith("```"):
assistant_reply = assistant_reply[:-3]
try:
return json.loads(assistant_reply) # just check the validity
except json.JSONDecodeError: # noqa: E722
pass
if assistant_reply.startswith("json "):
assistant_reply = assistant_reply[5:]
assistant_reply = assistant_reply.strip()
try:
return json.loads(assistant_reply) # just check the validity
except json.JSONDecodeError: # noqa: E722
pass
# Parse and print Assistant response
assistant_reply_json = fix_and_parse_json(assistant_reply)
logger.debug("Assistant reply JSON: %s", str(assistant_reply_json))
if assistant_reply_json == {}:
assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets(
assistant_reply
)
logger.debug("Assistant reply JSON 2: %s", str(assistant_reply_json))
if assistant_reply_json != {}:
return assistant_reply_json
logger.error(
"Error: The following AI output couldn't be converted to a JSON:\n",
assistant_reply,
)
if CFG.speak_mode:
say_text("I have received an invalid JSON response from the OpenAI API.")
return {}
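# Illustrative call with a fenced reply, the shape agents commonly receive
# (a sketch, not a test from this diff):
# >>> fix_json_using_multiple_techniques(
# ...     '```json\n{"thoughts": {"text": "t", "reasoning": "r", "plan": "p",'
# ...     ' "criticism": "c", "speak": "s"},'
# ...     ' "command": {"name": "do_nothing", "args": {}}}\n```'
# ... )
# The ```json fence is stripped before parsing, and a dict is returned.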
def fix_and_parse_json(
json_to_load: str, try_to_fix_with_gpt: bool = True
) -> Dict[Any, Any]:
"""Fix and parse JSON string
Args:
json_to_load (str): The JSON string.
try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT.
Defaults to True.
Returns:
str or dict[Any, Any]: The parsed JSON.
"""
with contextlib.suppress(json.JSONDecodeError):
json_to_load = json_to_load.replace("\t", "")
return json.loads(json_to_load)
with contextlib.suppress(json.JSONDecodeError):
json_to_load = correct_json(json_to_load)
return json.loads(json_to_load)
# Let's do something manually:
# sometimes GPT responds with something BEFORE the braces:
# "I'm sorry, I don't understand. Please try again."
# {"text": "I'm sorry, I don't understand. Please try again.",
# "confidence": 0.0}
# So let's try to find the first brace and then parse the rest
# of the string
try:
brace_index = json_to_load.index("{")
maybe_fixed_json = json_to_load[brace_index:]
last_brace_index = maybe_fixed_json.rindex("}")
maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1]
return json.loads(maybe_fixed_json)
except (json.JSONDecodeError, ValueError) as e:
return try_ai_fix(try_to_fix_with_gpt, e, json_to_load)
def try_ai_fix(
try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str
) -> Dict[Any, Any]:
"""Try to fix the JSON with the AI
Args:
try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI.
exception (Exception): The exception that was raised.
json_to_load (str): The JSON string to load.
Raises:
exception: If try_to_fix_with_gpt is False.
Returns:
str or dict[Any, Any]: The JSON string or dictionary.
"""
if not try_to_fix_with_gpt:
raise exception
if CFG.debug_mode:
logger.warn(
"Warning: Failed to parse AI output, attempting to fix."
"\n If you see this warning frequently, it's likely that"
" your prompt is confusing the AI. Try changing it up"
" slightly."
)
# Now try to fix this up using the ai_functions
ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA)
if ai_fixed_json != "failed":
return json.loads(ai_fixed_json)
# This allows the AI to react to the error message,
# which usually results in it correcting its ways.
# logger.error("Failed to fix AI output, telling the AI.")
return {}
def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str):
if CFG.speak_mode and CFG.debug_mode:
say_text(
"I have received an invalid JSON response from the OpenAI API. "
"Trying to fix it now."
)
logger.error("Attempting to fix JSON by finding outermost brackets\n")
try:
json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}")
json_match = json_pattern.search(json_string)
if json_match:
# Extract the valid JSON object from the string
json_string = json_match.group(0)
logger.typewriter_log(
title="Apparently json was fixed.", title_color=Fore.GREEN
)
if CFG.speak_mode and CFG.debug_mode:
say_text("Apparently json was fixed.")
else:
return {}
except (json.JSONDecodeError, ValueError):
if CFG.debug_mode:
logger.error(f"Error: Invalid JSON: {json_string}\n")
if CFG.speak_mode:
say_text("Didn't work. I will have to ignore this response then.")
logger.error("Error: Invalid JSON, setting it to empty JSON now.\n")
json_string = {}
return fix_and_parse_json(json_string)

autogpt/json_utils/llm_response_format_1.json Normal file
@@ -0,0 +1,31 @@
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"thoughts": {
"type": "object",
"properties": {
"text": {"type": "string"},
"reasoning": {"type": "string"},
"plan": {"type": "string"},
"criticism": {"type": "string"},
"speak": {"type": "string"}
},
"required": ["text", "reasoning", "plan", "criticism", "speak"],
"additionalProperties": false
},
"command": {
"type": "object",
"properties": {
"name": {"type": "string"},
"args": {
"type": "object"
}
},
"required": ["name", "args"],
"additionalProperties": false
}
},
"required": ["thoughts", "command"],
"additionalProperties": false
}
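
An illustrative validation against this schema with jsonschema, mirroring the helper in autogpt/json_utils/utilities.py below (the schema's file name is inferred from LLM_DEFAULT_RESPONSE_FORMAT):

import json

from jsonschema import Draft7Validator

with open("autogpt/json_utils/llm_response_format_1.json") as f:
    schema = json.load(f)
for error in Draft7Validator(schema).iter_errors({"thoughts": {}, "command": {}}):
    print(error.message)  # reports each missing required field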

autogpt/json_utils/utilities.py Normal file
@@ -0,0 +1,79 @@
"""Utilities for the json_fixes package."""
import json
import re
from jsonschema import Draft7Validator
from autogpt.config import Config
from autogpt.logs import logger
CFG = Config()
LLM_DEFAULT_RESPONSE_FORMAT = "llm_response_format_1"
def extract_char_position(error_message: str) -> int:
"""Extract the character position from the JSONDecodeError message.
Args:
error_message (str): The error message from the JSONDecodeError
exception.
Returns:
int: The character position.
"""
char_pattern = re.compile(r"\(char (\d+)\)")
if match := char_pattern.search(error_message):
return int(match[1])
else:
raise ValueError("Character position not found in the error message.")
def validate_json(json_object: object, schema_name: str) -> dict | None:
"""
:type schema_name: object
:param schema_name: str
:type json_object: object
"""
with open(f"autogpt/json_utils/{schema_name}.json", "r") as f:
schema = json.load(f)
validator = Draft7Validator(schema)
if errors := sorted(validator.iter_errors(json_object), key=lambda e: e.path):
logger.error("The JSON object is invalid.")
if CFG.debug_mode:
logger.error(
json.dumps(json_object, indent=4)
) # Replace 'json_object' with the variable containing the JSON data
logger.error("The following issues were found:")
for error in errors:
logger.error(f"Error: {error.message}")
else:
logger.debug("The JSON object is valid.")
return json_object
def validate_json_string(json_string: str, schema_name: str) -> dict | None:
"""
:type schema_name: object
:param schema_name: str
:type json_object: object
"""
try:
json_loaded = json.loads(json_string)
return validate_json(json_loaded, schema_name)
except:
return None
def is_string_valid_json(json_string: str, schema_name: str) -> bool:
"""
:type schema_name: object
:param schema_name: str
:type json_object: object
"""
return validate_json_string(json_string, schema_name) is not None
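
An illustrative end-to-end check of an assistant reply (run from the repository root so the relative schema path resolves):

reply = (
    '{"thoughts": {"text": "t", "reasoning": "r", "plan": "p",'
    ' "criticism": "c", "speak": "s"},'
    ' "command": {"name": "do_nothing", "args": {}}}'
)
assert is_string_valid_json(reply, LLM_DEFAULT_RESPONSE_FORMAT)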

autogpt/llm/__init__.py Normal file

@@ -0,0 +1,38 @@
from autogpt.llm.api_manager import ApiManager
from autogpt.llm.base import (
ChatModelInfo,
ChatModelResponse,
EmbeddingModelInfo,
EmbeddingModelResponse,
LLMResponse,
Message,
ModelInfo,
)
from autogpt.llm.chat import chat_with_ai, create_chat_message, generate_context
from autogpt.llm.llm_utils import (
call_ai_function,
create_chat_completion,
get_ada_embedding,
)
from autogpt.llm.modelsinfo import COSTS
from autogpt.llm.token_counter import count_message_tokens, count_string_tokens
__all__ = [
"ApiManager",
"Message",
"ModelInfo",
"ChatModelInfo",
"EmbeddingModelInfo",
"LLMResponse",
"ChatModelResponse",
"EmbeddingModelResponse",
"create_chat_message",
"generate_context",
"chat_with_ai",
"call_ai_function",
"create_chat_completion",
"get_ada_embedding",
"COSTS",
"count_message_tokens",
"count_string_tokens",
]

autogpt/llm/api_manager.py Normal file

@@ -0,0 +1,128 @@
from __future__ import annotations
import openai
from autogpt.config import Config
from autogpt.llm.modelsinfo import COSTS
from autogpt.logs import logger
from autogpt.singleton import Singleton
class ApiManager(metaclass=Singleton):
def __init__(self):
self.total_prompt_tokens = 0
self.total_completion_tokens = 0
self.total_cost = 0
self.total_budget = 0
def reset(self):
self.total_prompt_tokens = 0
self.total_completion_tokens = 0
self.total_cost = 0
self.total_budget = 0.0
def create_chat_completion(
self,
messages: list, # type: ignore
model: str | None = None,
temperature: float = None,
max_tokens: int | None = None,
deployment_id=None,
):
"""
Create a chat completion and update the cost.
Args:
messages (list): The list of messages to send to the API.
model (str): The model to use for the API call.
temperature (float): The temperature to use for the API call.
max_tokens (int): The maximum number of tokens for the API call.
Returns:
The raw ChatCompletion response object from the OpenAI API.
"""
cfg = Config()
if temperature is None:
temperature = cfg.temperature
if deployment_id is not None:
response = openai.ChatCompletion.create(
deployment_id=deployment_id,
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
api_key=cfg.openai_api_key,
)
else:
response = openai.ChatCompletion.create(
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
api_key=cfg.openai_api_key,
)
logger.debug(f"Response: {response}")
prompt_tokens = response.usage.prompt_tokens
completion_tokens = response.usage.completion_tokens
self.update_cost(prompt_tokens, completion_tokens, model)
return response
def update_cost(self, prompt_tokens, completion_tokens, model):
"""
Update the total cost, prompt tokens, and completion tokens.
Args:
prompt_tokens (int): The number of tokens used in the prompt.
completion_tokens (int): The number of tokens used in the completion.
model (str): The model used for the API call.
"""
self.total_prompt_tokens += prompt_tokens
self.total_completion_tokens += completion_tokens
self.total_cost += (
prompt_tokens * COSTS[model]["prompt"]
+ completion_tokens * COSTS[model]["completion"]
) / 1000
logger.debug(f"Total running cost: ${self.total_cost:.3f}")
def set_total_budget(self, total_budget):
"""
Sets the total user-defined budget for API calls.
Args:
total_budget (float): The total budget for API calls.
"""
self.total_budget = total_budget
def get_total_prompt_tokens(self):
"""
Get the total number of prompt tokens.
Returns:
int: The total number of prompt tokens.
"""
return self.total_prompt_tokens
def get_total_completion_tokens(self):
"""
Get the total number of completion tokens.
Returns:
int: The total number of completion tokens.
"""
return self.total_completion_tokens
def get_total_cost(self):
"""
Get the total cost of API calls.
Returns:
float: The total cost of API calls.
"""
return self.total_cost
def get_total_budget(self):
"""
Get the total user-defined budget for API calls.
Returns:
float: The total budget for API calls.
"""
return self.total_budget
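
An illustrative budget round-trip with the singleton, using the gpt-3.5-turbo rates from the COSTS table later in this diff:

api = ApiManager()
api.set_total_budget(1.00)
api.update_cost(prompt_tokens=1000, completion_tokens=500, model="gpt-3.5-turbo")
print(api.get_total_cost())  # 0.003
print(api.get_total_budget() - api.get_total_cost())  # 0.997 remaining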

autogpt/llm/base.py Normal file

@@ -0,0 +1,65 @@
from dataclasses import dataclass, field
from typing import List, TypedDict
class Message(TypedDict):
"""OpenAI Message object containing a role and the message content"""
role: str
content: str
@dataclass
class ModelInfo:
"""Struct for model information.
Would be lovely to eventually get this directly from APIs, but needs to be scraped from
websites for now.
"""
name: str
prompt_token_cost: float
completion_token_cost: float
max_tokens: int
@dataclass
class ChatModelInfo(ModelInfo):
"""Struct for chat model information."""
pass
@dataclass
class EmbeddingModelInfo(ModelInfo):
"""Struct for embedding model information."""
embedding_dimensions: int
@dataclass
class LLMResponse:
"""Standard response struct for a response from an LLM model."""
model_info: ModelInfo
prompt_tokens_used: int = 0
completion_tokens_used: int = 0
@dataclass
class EmbeddingModelResponse(LLMResponse):
"""Standard response struct for a response from an embedding model."""
embedding: List[float] = field(default_factory=list)
def __post_init__(self):
if self.completion_tokens_used:
raise ValueError("Embeddings should not have completion tokens used.")
@dataclass
class ChatModelResponse(LLMResponse):
"""Standard response struct for a response from an LLM model."""
content: str = None
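
An illustrative construction of these structs (costs and limits match the OpenAI model table later in this diff):

info = ChatModelInfo(
    name="gpt-4", prompt_token_cost=0.03, completion_token_cost=0.06, max_tokens=8192
)
resp = ChatModelResponse(
    model_info=info, prompt_tokens_used=120, completion_tokens_used=80, content="Hi."
)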

autogpt/llm/chat.py Normal file

@@ -0,0 +1,253 @@
import time
from random import shuffle
from openai.error import RateLimitError
from autogpt.config import Config
from autogpt.llm.api_manager import ApiManager
from autogpt.llm.base import Message
from autogpt.llm.llm_utils import create_chat_completion
from autogpt.llm.token_counter import count_message_tokens
from autogpt.logs import logger
from autogpt.memory_management.store_memory import (
save_memory_trimmed_from_context_window,
)
from autogpt.memory_management.summary_memory import (
get_newly_trimmed_messages,
update_running_summary,
)
cfg = Config()
def create_chat_message(role, content) -> Message:
"""
Create a chat message with the given role and content.
Args:
role (str): The role of the message sender, e.g., "system", "user", or "assistant".
content (str): The content of the message.
Returns:
dict: A dictionary containing the role and content of the message.
"""
return {"role": role, "content": content}
def generate_context(prompt, relevant_memory, full_message_history, model):
current_context = [
create_chat_message("system", prompt),
create_chat_message(
"system", f"The current time and date is {time.strftime('%c')}"
),
# create_chat_message(
# "system",
# f"This reminds you of these events from your past:\n{relevant_memory}\n\n",
# ),
]
# Add messages from the full message history until we reach the token limit
next_message_to_add_index = len(full_message_history) - 1
insertion_index = len(current_context)
# Count the currently used tokens
current_tokens_used = count_message_tokens(current_context, model)
return (
next_message_to_add_index,
current_tokens_used,
insertion_index,
current_context,
)
# TODO: Change debug from hardcode to argument
def chat_with_ai(
agent, prompt, user_input, full_message_history, permanent_memory, token_limit
):
"""Interact with the OpenAI API, sending the prompt, user input, message history,
and permanent memory."""
while True:
try:
"""
Interact with the OpenAI API, sending the prompt, user input,
message history, and permanent memory.
Args:
prompt (str): The prompt explaining the rules to the AI.
user_input (str): The input from the user.
full_message_history (list): The list of all messages sent between the
user and the AI.
permanent_memory (Obj): The memory object containing the permanent
memory.
token_limit (int): The maximum number of tokens allowed in the API call.
Returns:
str: The AI's response.
"""
model = cfg.fast_llm_model # TODO: Change model from hardcode to argument
# Reserve 1000 tokens for the response
logger.debug(f"Token limit: {token_limit}")
send_token_limit = token_limit - 1000
# if len(full_message_history) == 0:
# relevant_memory = ""
# else:
# recent_history = full_message_history[-5:]
# shuffle(recent_history)
# relevant_memories = permanent_memory.get_relevant(
# str(recent_history), 5
# )
# if relevant_memories:
# shuffle(relevant_memories)
# relevant_memory = str(relevant_memories)
relevant_memory = ""
logger.debug(f"Memory Stats: {permanent_memory.get_stats()}")
(
next_message_to_add_index,
current_tokens_used,
insertion_index,
current_context,
) = generate_context(prompt, relevant_memory, full_message_history, model)
# while current_tokens_used > 2500:
# # remove memories until we are under 2500 tokens
# relevant_memory = relevant_memory[:-1]
# (
# next_message_to_add_index,
# current_tokens_used,
# insertion_index,
# current_context,
# ) = generate_context(
# prompt, relevant_memory, full_message_history, model
# )
current_tokens_used += count_message_tokens(
[create_chat_message("user", user_input)], model
) # Account for user input (appended later)
current_tokens_used += 500 # Account for memory (appended later) TODO: The final memory may be less than 500 tokens
# Add Messages until the token limit is reached or there are no more messages to add.
while next_message_to_add_index >= 0:
# print (f"CURRENT TOKENS USED: {current_tokens_used}")
message_to_add = full_message_history[next_message_to_add_index]
tokens_to_add = count_message_tokens([message_to_add], model)
if current_tokens_used + tokens_to_add > send_token_limit:
# save_memory_trimmed_from_context_window(
# full_message_history,
# next_message_to_add_index,
# permanent_memory,
# )
break
# Add the most recent message to the start of the current context,
# after the two system prompts.
current_context.insert(
insertion_index, full_message_history[next_message_to_add_index]
)
# Count the currently used tokens
current_tokens_used += tokens_to_add
# Move to the next most recent message in the full message history
next_message_to_add_index -= 1
# Insert Memories
if len(full_message_history) > 0:
(
newly_trimmed_messages,
agent.last_memory_index,
) = get_newly_trimmed_messages(
full_message_history=full_message_history,
current_context=current_context,
last_memory_index=agent.last_memory_index,
)
agent.summary_memory = update_running_summary(
current_memory=agent.summary_memory,
new_events=newly_trimmed_messages,
)
current_context.insert(insertion_index, agent.summary_memory)
api_manager = ApiManager()
# inform the AI about its remaining budget (if it has one)
if api_manager.get_total_budget() > 0.0:
remaining_budget = (
api_manager.get_total_budget() - api_manager.get_total_cost()
)
if remaining_budget < 0:
remaining_budget = 0
system_message = (
f"Your remaining API budget is ${remaining_budget:.3f}"
+ (
" BUDGET EXCEEDED! SHUT DOWN!\n\n"
if remaining_budget == 0
else " Budget very nearly exceeded! Shut down gracefully!\n\n"
if remaining_budget < 0.005
else " Budget nearly exceeded. Finish up.\n\n"
if remaining_budget < 0.01
else "\n\n"
)
)
logger.debug(system_message)
current_context.append(create_chat_message("system", system_message))
# Append user input, the length of this is accounted for above
current_context.extend([create_chat_message("user", user_input)])
plugin_count = len(cfg.plugins)
for i, plugin in enumerate(cfg.plugins):
if not plugin.can_handle_on_planning():
continue
plugin_response = plugin.on_planning(
agent.prompt_generator, current_context
)
if not plugin_response or plugin_response == "":
continue
tokens_to_add = count_message_tokens(
[create_chat_message("system", plugin_response)], model
)
if current_tokens_used + tokens_to_add > send_token_limit:
logger.debug("Plugin response too long, skipping:", plugin_response)
logger.debug("Plugins remaining at stop:", plugin_count - i)
break
current_context.append(create_chat_message("system", plugin_response))
# Calculate remaining tokens
tokens_remaining = token_limit - current_tokens_used
# assert tokens_remaining >= 0, "Tokens remaining is negative.
# This should never happen, please submit a bug report at
# https://www.github.com/Torantulino/Auto-GPT"
# Debug print the current context
logger.debug(f"Token limit: {token_limit}")
logger.debug(f"Send Token Count: {current_tokens_used}")
logger.debug(f"Tokens remaining for response: {tokens_remaining}")
logger.debug("------------ CONTEXT SENT TO AI ---------------")
for message in current_context:
# Skip printing the prompt
if message["role"] == "system" and message["content"] == prompt:
continue
logger.debug(f"{message['role'].capitalize()}: {message['content']}")
logger.debug("")
logger.debug("----------- END OF CONTEXT ----------------")
# TODO: use a model defined elsewhere, so that model can contain
# temperature and other settings we care about
assistant_reply = create_chat_completion(
model=model,
messages=current_context,
max_tokens=tokens_remaining,
)
# Update full message history
full_message_history.append(create_chat_message("user", user_input))
full_message_history.append(
create_chat_message("assistant", assistant_reply)
)
return assistant_reply
except RateLimitError:
# TODO: When we switch to langchain, this is built in
logger.warn("Error: ", "API Rate Limit Reached. Waiting 10 seconds...")
time.sleep(10)

autogpt/llm/llm_utils.py Normal file

@@ -0,0 +1,258 @@
from __future__ import annotations
import functools
import time
from typing import List, Optional
import openai
from colorama import Fore, Style
from openai.error import APIError, RateLimitError, Timeout
from autogpt.config import Config
from autogpt.llm.api_manager import ApiManager
from autogpt.llm.base import Message
from autogpt.logs import logger
def retry_openai_api(
num_retries: int = 10,
backoff_base: float = 2.0,
warn_user: bool = True,
):
"""Retry an OpenAI API call.
Args:
num_retries int: Number of retries. Defaults to 10.
backoff_base float: Base for exponential backoff. Defaults to 2.
warn_user bool: Whether to warn the user. Defaults to True.
"""
retry_limit_msg = f"{Fore.RED}Error: " f"Reached rate limit, passing...{Fore.RESET}"
api_key_error_msg = (
f"Please double check that you have setup a "
f"{Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. You can "
f"read more here: {Fore.CYAN}https://significant-gravitas.github.io/Auto-GPT/setup/#getting-an-api-key{Fore.RESET}"
)
backoff_msg = (
f"{Fore.RED}Error: API Bad gateway. Waiting {{backoff}} seconds...{Fore.RESET}"
)
def _wrapper(func):
@functools.wraps(func)
def _wrapped(*args, **kwargs):
user_warned = not warn_user
num_attempts = num_retries + 1 # +1 for the first attempt
for attempt in range(1, num_attempts + 1):
try:
return func(*args, **kwargs)
except RateLimitError:
if attempt == num_attempts:
raise
logger.debug(retry_limit_msg)
if not user_warned:
logger.double_check(api_key_error_msg)
user_warned = True
except APIError as e:
if (e.http_status != 502) or (attempt == num_attempts):
raise
backoff = backoff_base ** (attempt + 2)
logger.debug(backoff_msg.format(backoff=backoff))
time.sleep(backoff)
return _wrapped
return _wrapper
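# Illustrative application of the decorator (the wrapped function below is
# hypothetical, not part of this diff):
#
#     @retry_openai_api(num_retries=3, backoff_base=2.0)
#     def embed(text: str):
#         return openai.Embedding.create(input=[text], model="text-embedding-ada-002")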
def call_ai_function(
function: str, args: list, description: str, model: str | None = None
) -> str:
"""Call an AI function
This is a magic function that can do anything with no-code. See
https://github.com/Torantulino/AI-Functions for more info.
Args:
function (str): The function to call
args (list): The arguments to pass to the function
description (str): The description of the function
model (str, optional): The model to use. Defaults to None.
Returns:
str: The response from the function
"""
cfg = Config()
if model is None:
model = cfg.smart_llm_model
# For each arg, if any are None, convert to "None":
args = [str(arg) if arg is not None else "None" for arg in args]
# parse args to comma separated string
args: str = ", ".join(args)
messages: List[Message] = [
{
"role": "system",
"content": f"You are now the following python function: ```# {description}"
f"\n{function}```\n\nOnly respond with your `return` value.",
},
{"role": "user", "content": args},
]
return create_chat_completion(model=model, messages=messages, temperature=0)
# Overly simple abstraction until we create something better
# simple retry mechanism when getting a rate error or a bad gateway
def create_chat_completion(
messages: List[Message], # type: ignore
model: Optional[str] = None,
temperature: float = None,
max_tokens: Optional[int] = None,
) -> str:
"""Create a chat completion using the OpenAI API
Args:
messages (List[Message]): The messages to send to the chat completion
model (str, optional): The model to use. Defaults to None.
temperature (float, optional): The temperature to use. Defaults to the configured temperature.
max_tokens (int, optional): The max tokens to use. Defaults to None.
Returns:
str: The response from the chat completion
"""
cfg = Config()
if temperature is None:
temperature = cfg.temperature
num_retries = 10
warned_user = False
logger.debug(
f"{Fore.GREEN}Creating chat completion with model {model}, temperature {temperature}, max_tokens {max_tokens}{Fore.RESET}"
)
for plugin in cfg.plugins:
if plugin.can_handle_chat_completion(
messages=messages,
model=model,
temperature=temperature,
max_tokens=max_tokens,
):
message = plugin.handle_chat_completion(
messages=messages,
model=model,
temperature=temperature,
max_tokens=max_tokens,
)
if message is not None:
return message
api_manager = ApiManager()
response = None
for attempt in range(num_retries):
backoff = 2 ** (attempt + 2)
try:
if cfg.use_azure:
response = api_manager.create_chat_completion(
deployment_id=cfg.get_azure_deployment_id_for_model(model),
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
)
else:
response = api_manager.create_chat_completion(
model=model,
messages=messages,
temperature=temperature,
max_tokens=max_tokens,
)
break
except RateLimitError:
logger.debug(
f"{Fore.RED}Error: ", f"Reached rate limit, passing...{Fore.RESET}"
)
if not warned_user:
logger.double_check(
f"Please double check that you have setup a {Fore.CYAN + Style.BRIGHT}PAID{Style.RESET_ALL} OpenAI API Account. "
+ f"You can read more here: {Fore.CYAN}https://significant-gravitas.github.io/Auto-GPT/setup/#getting-an-api-key{Fore.RESET}"
)
warned_user = True
except (APIError, Timeout) as e:
if e.http_status != 502:
raise
if attempt == num_retries - 1:
raise
logger.debug(
f"{Fore.RED}Error: ",
f"API Bad gateway. Waiting {backoff} seconds...{Fore.RESET}",
)
time.sleep(backoff)
if response is None:
logger.typewriter_log(
"FAILED TO GET RESPONSE FROM OPENAI",
Fore.RED,
"Auto-GPT has failed to get a response from OpenAI's services. "
+ f"Try running Auto-GPT again, and if the problem the persists try running it with `{Fore.CYAN}--debug{Fore.RESET}`.",
)
logger.double_check()
if cfg.debug_mode:
raise RuntimeError(f"Failed to get response after {num_retries} retries")
else:
quit(1)
resp = response.choices[0].message["content"]
for plugin in cfg.plugins:
if not plugin.can_handle_on_response():
continue
resp = plugin.on_response(resp)
return resp
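# Illustrative call (assumes OPENAI_API_KEY is set; plugins may intercept the
# request first, as shown above):
#
#     reply = create_chat_completion(
#         messages=[{"role": "user", "content": "Say hello."}],
#         model="gpt-3.5-turbo",
#         max_tokens=16,
#     )
#     print(reply)  # the assistant's text content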
def get_ada_embedding(text: str) -> List[float]:
"""Get an embedding from the ada model.
Args:
text (str): The text to embed.
Returns:
List[float]: The embedding.
"""
cfg = Config()
model = "text-embedding-ada-002"
text = text.replace("\n", " ")
if cfg.use_azure:
kwargs = {"engine": cfg.get_azure_deployment_id_for_model(model)}
else:
kwargs = {"model": model}
embedding = create_embedding(text, **kwargs)
api_manager = ApiManager()
api_manager.update_cost(
prompt_tokens=embedding.usage.prompt_tokens,
completion_tokens=0,
model=model,
)
return embedding["data"][0]["embedding"]
@retry_openai_api()
def create_embedding(
text: str,
*_,
**kwargs,
) -> openai.Embedding:
"""Create an embedding using the OpenAI API
Args:
text (str): The text to embed.
kwargs: Other arguments to pass to the OpenAI API embedding creation call.
Returns:
openai.Embedding: The embedding object.
"""
cfg = Config()
return openai.Embedding.create(
input=[text],
api_key=cfg.openai_api_key,
**kwargs,
)

autogpt/llm/modelsinfo.py Normal file
@@ -0,0 +1,7 @@
COSTS = {
"gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
"gpt-3.5-turbo-0301": {"prompt": 0.002, "completion": 0.002},
"gpt-4-0314": {"prompt": 0.03, "completion": 0.06},
"gpt-4": {"prompt": 0.03, "completion": 0.06},
"text-embedding-ada-002": {"prompt": 0.0004, "completion": 0.0},
}
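
These are per-1000-token rates, mirrored by ApiManager.update_cost. For example:

model = "gpt-4"
cost = (1000 * COSTS[model]["prompt"] + 500 * COSTS[model]["completion"]) / 1000
print(f"${cost:.3f}")  # $0.060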

autogpt/llm/providers/openai.py Normal file
@@ -0,0 +1,37 @@
from autogpt.llm.base import ChatModelInfo, EmbeddingModelInfo
OPEN_AI_CHAT_MODELS = {
"gpt-3.5-turbo": ChatModelInfo(
name="gpt-3.5-turbo",
prompt_token_cost=0.002,
completion_token_cost=0.002,
max_tokens=4096,
),
"gpt-4": ChatModelInfo(
name="gpt-4",
prompt_token_cost=0.03,
completion_token_cost=0.06,
max_tokens=8192,
),
"gpt-4-32k": ChatModelInfo(
name="gpt-4-32k",
prompt_token_cost=0.06,
completion_token_cost=0.12,
max_tokens=32768,
),
}
OPEN_AI_EMBEDDING_MODELS = {
"text-embedding-ada-002": EmbeddingModelInfo(
name="text-embedding-ada-002",
prompt_token_cost=0.0004,
completion_token_cost=0.0,
max_tokens=8191,
embedding_dimensions=1536,
),
}
OPEN_AI_MODELS = {
**OPEN_AI_CHAT_MODELS,
**OPEN_AI_EMBEDDING_MODELS,
}
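
An illustrative lookup of the merged table:

print(OPEN_AI_MODELS["gpt-4"].max_tokens)  # 8192
print(OPEN_AI_MODELS["text-embedding-ada-002"].embedding_dimensions)  # 1536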

autogpt/llm/token_counter.py
@@ -1,16 +1,28 @@
"""Functions for counting the number of tokens in a message or string."""
from __future__ import annotations
from typing import List
import tiktoken
from autogpt.llm.base import Message
from autogpt.logs import logger
def count_message_tokens(
messages: List[Message], model: str = "gpt-3.5-turbo-0301"
) -> int:
"""
Returns the number of tokens used by a list of messages.
Args:
messages (list): A list of messages, each of which is a dictionary
containing the role and content of the message.
model (str): The name of the model to use for tokenization.
Defaults to "gpt-3.5-turbo-0301".
Returns:
int: The number of tokens used by the list of messages.
"""
try:
encoding = tiktoken.encoding_for_model(model)
@@ -18,19 +30,26 @@ def count_message_tokens(messages : List[Dict[str, str]], model : str = "gpt-3.5
logger.warn("Warning: model not found. Using cl100k_base encoding.")
encoding = tiktoken.get_encoding("cl100k_base")
if model == "gpt-3.5-turbo":
# !Note: gpt-3.5-turbo may change over time.
# Returning num tokens assuming gpt-3.5-turbo-0301.
return count_message_tokens(messages, model="gpt-3.5-turbo-0301")
elif model == "gpt-4":
# !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.")
return count_message_tokens(messages, model="gpt-4-0314")
elif model == "gpt-3.5-turbo-0301":
tokens_per_message = (
4  # every message follows <|start|>{role/name}\n{content}<|end|>\n
)
tokens_per_name = -1 # if there's a name, the role is omitted
elif model == "gpt-4-0314":
tokens_per_message = 3
tokens_per_name = 1
else:
raise NotImplementedError(
f"num_tokens_from_messages() is not implemented for model {model}.\n"
" See https://github.com/openai/openai-python/blob/main/chatml.md for"
" information on how messages are converted to tokens."
)
num_tokens = 0
for message in messages:
num_tokens += tokens_per_message
@@ -41,17 +60,17 @@ def count_message_tokens(messages : List[Dict[str, str]], model : str = "gpt-3.5
num_tokens += 3 # every reply is primed with <|start|>assistant<|message|>
return num_tokens
def count_string_tokens(string: str, model_name: str) -> int:
"""
Returns the number of tokens in a text string.
Args:
string (str): The text string.
model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo")
Returns:
int: The number of tokens in the text string.
"""
encoding = tiktoken.encoding_for_model(model_name)
return len(encoding.encode(string))
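
Illustrative counts (tiktoken fetches its encodings on first use):

msgs = [{"role": "user", "content": "Hello there"}]
print(count_message_tokens(msgs))  # content tokens plus per-message overhead
print(count_string_tokens("Hello there", "gpt-3.5-turbo"))  # 2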

autogpt/logs.py Normal file

@@ -0,0 +1,256 @@
"""Logging module for Auto-GPT."""
import logging
import os
import random
import re
import time
from logging import LogRecord
from colorama import Fore, Style
from autogpt.singleton import Singleton
from autogpt.speech import say_text
class Logger(metaclass=Singleton):
"""
Logger that handles titles in different colors.
Outputs logs to console, activity.log, and error.log.
The console handler simulates typing.
"""
def __init__(self):
# create log directory if it doesn't exist
this_files_dir_path = os.path.dirname(__file__)
log_dir = os.path.join(this_files_dir_path, "../logs")
if not os.path.exists(log_dir):
os.makedirs(log_dir)
log_file = "activity.log"
error_file = "error.log"
console_formatter = AutoGptFormatter("%(title_color)s %(message)s")
# Create a handler for console which simulate typing
self.typing_console_handler = TypingConsoleHandler()
self.typing_console_handler.setLevel(logging.INFO)
self.typing_console_handler.setFormatter(console_formatter)
# Create a handler for console without typing simulation
self.console_handler = ConsoleHandler()
self.console_handler.setLevel(logging.DEBUG)
self.console_handler.setFormatter(console_formatter)
# Info handler in activity.log
self.file_handler = logging.FileHandler(
os.path.join(log_dir, log_file), "a", "utf-8"
)
self.file_handler.setLevel(logging.DEBUG)
info_formatter = AutoGptFormatter(
"%(asctime)s %(levelname)s %(title)s %(message_no_color)s"
)
self.file_handler.setFormatter(info_formatter)
# Error handler error.log
error_handler = logging.FileHandler(
os.path.join(log_dir, error_file), "a", "utf-8"
)
error_handler.setLevel(logging.ERROR)
error_formatter = AutoGptFormatter(
"%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s"
" %(message_no_color)s"
)
error_handler.setFormatter(error_formatter)
self.typing_logger = logging.getLogger("TYPER")
self.typing_logger.addHandler(self.typing_console_handler)
self.typing_logger.addHandler(self.file_handler)
self.typing_logger.addHandler(error_handler)
self.typing_logger.setLevel(logging.DEBUG)
self.logger = logging.getLogger("LOGGER")
self.logger.addHandler(self.console_handler)
self.logger.addHandler(self.file_handler)
self.logger.addHandler(error_handler)
self.logger.setLevel(logging.DEBUG)
self.speak_mode = False
def typewriter_log(
self, title="", title_color="", content="", speak_text=False, level=logging.INFO
):
if speak_text and self.speak_mode:
say_text(f"{title}. {content}")
if content:
if isinstance(content, list):
content = " ".join(content)
else:
content = ""
self.typing_logger.log(
level, content, extra={"title": title, "color": title_color}
)
def debug(
self,
message,
title="",
title_color="",
):
self._log(title, title_color, message, logging.DEBUG)
def info(
self,
message,
title="",
title_color="",
):
self._log(title, title_color, message, logging.INFO)
def warn(
self,
message,
title="",
title_color="",
):
self._log(title, title_color, message, logging.WARN)
def error(self, title, message=""):
self._log(title, Fore.RED, message, logging.ERROR)
def _log(
self,
title: str = "",
title_color: str = "",
message: str = "",
level=logging.INFO,
):
if message:
if isinstance(message, list):
message = " ".join(message)
self.logger.log(
level, message, extra={"title": str(title), "color": str(title_color)}
)
def set_level(self, level):
self.logger.setLevel(level)
self.typing_logger.setLevel(level)
def double_check(self, additionalText=None):
if not additionalText:
additionalText = (
"Please ensure you've setup and configured everything"
" correctly. Read https://github.com/Torantulino/Auto-GPT#readme to "
"double check. You can also create a github issue or join the discord"
" and ask there!"
)
self.typewriter_log("DOUBLE CHECK CONFIGURATION", Fore.YELLOW, additionalText)
"""
Output stream to console using simulated typing
"""
class TypingConsoleHandler(logging.StreamHandler):
def emit(self, record):
min_typing_speed = 0.05  # initial longest delay between words, in seconds
max_typing_speed = 0.01  # initial shortest delay between words, in seconds
msg = self.format(record)
try:
words = msg.split()
for i, word in enumerate(words):
print(word, end="", flush=True)
if i < len(words) - 1:
print(" ", end="", flush=True)
typing_speed = random.uniform(min_typing_speed, max_typing_speed)
time.sleep(typing_speed)
# type faster after each word
min_typing_speed = min_typing_speed * 0.95
max_typing_speed = max_typing_speed * 0.95
print()
except Exception:
self.handleError(record)
class ConsoleHandler(logging.StreamHandler):
def emit(self, record) -> None:
msg = self.format(record)
try:
print(msg)
except Exception:
self.handleError(record)
class AutoGptFormatter(logging.Formatter):
"""
Handles the custom placeholders 'title_color' and 'message_no_color'.
To use this formatter, make sure to pass 'color', 'title' as log extras.
"""
def format(self, record: LogRecord) -> str:
if hasattr(record, "color"):
record.title_color = (
getattr(record, "color")
+ getattr(record, "title")
+ " "
+ Style.RESET_ALL
)
else:
record.title_color = getattr(record, "title")
if hasattr(record, "msg"):
record.message_no_color = remove_color_codes(getattr(record, "msg"))
else:
record.message_no_color = ""
return super().format(record)
def remove_color_codes(s: str) -> str:
ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])")
return ansi_escape.sub("", s)
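# Illustrative strip of colorama codes, matching what the file handlers store:
# >>> remove_color_codes(f"{Fore.RED}ERROR{Style.RESET_ALL}")
# 'ERROR'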
logger = Logger()
def print_assistant_thoughts(
ai_name: str,
assistant_reply_json_valid: dict,
speak_mode: bool = False,
) -> None:
assistant_thoughts_reasoning = None
assistant_thoughts_plan = None
assistant_thoughts_speak = None
assistant_thoughts_criticism = None
assistant_thoughts = assistant_reply_json_valid.get("thoughts", {})
assistant_thoughts_text = assistant_thoughts.get("text")
if assistant_thoughts:
assistant_thoughts_reasoning = assistant_thoughts.get("reasoning")
assistant_thoughts_plan = assistant_thoughts.get("plan")
assistant_thoughts_criticism = assistant_thoughts.get("criticism")
assistant_thoughts_speak = assistant_thoughts.get("speak")
logger.typewriter_log(
f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}"
)
logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}")
if assistant_thoughts_plan:
logger.typewriter_log("PLAN:", Fore.YELLOW, "")
# If it's a list, join it into a string
if isinstance(assistant_thoughts_plan, list):
assistant_thoughts_plan = "\n".join(assistant_thoughts_plan)
elif isinstance(assistant_thoughts_plan, dict):
assistant_thoughts_plan = str(assistant_thoughts_plan)
# Split the input_string using the newline character and dashes
lines = assistant_thoughts_plan.split("\n")
for line in lines:
line = line.lstrip("- ")
logger.typewriter_log("- ", Fore.GREEN, line.strip())
logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}")
# Speak the assistant's thoughts
if speak_mode and assistant_thoughts_speak:
say_text(assistant_thoughts_speak)

autogpt/main.py Normal file

@@ -0,0 +1,150 @@
"""The application entry point. Can be invoked by a CLI or any other front end application."""
import logging
import sys
from pathlib import Path
from colorama import Fore
from autogpt.agent.agent import Agent
from autogpt.commands.command import CommandRegistry
from autogpt.config import Config, check_openai_api_key
from autogpt.configurator import create_config
from autogpt.logs import logger
from autogpt.memory import get_memory
from autogpt.plugins import scan_plugins
from autogpt.prompts.prompt import DEFAULT_TRIGGERING_PROMPT, construct_main_ai_config
from autogpt.utils import get_current_git_branch, get_latest_bulletin
from autogpt.workspace import Workspace
from scripts.install_plugin_deps import install_plugin_dependencies
def run_auto_gpt(
continuous: bool,
continuous_limit: int,
ai_settings: str,
skip_reprompt: bool,
speak: bool,
debug: bool,
gpt3only: bool,
gpt4only: bool,
memory_type: str,
browser_name: str,
allow_downloads: bool,
skip_news: bool,
workspace_directory: str,
install_plugin_deps: bool,
):
# Configure logging before we do anything else.
logger.set_level(logging.DEBUG if debug else logging.INFO)
logger.speak_mode = speak
cfg = Config()
# TODO: fill in llm values here
check_openai_api_key()
create_config(
continuous,
continuous_limit,
ai_settings,
skip_reprompt,
speak,
debug,
gpt3only,
gpt4only,
memory_type,
browser_name,
allow_downloads,
skip_news,
)
if not cfg.skip_news:
motd = get_latest_bulletin()
if motd:
logger.typewriter_log("NEWS: ", Fore.GREEN, motd)
git_branch = get_current_git_branch()
if git_branch and git_branch != "stable":
logger.typewriter_log(
"WARNING: ",
Fore.RED,
f"You are running on `{git_branch}` branch "
"- this is not a supported branch.",
)
if sys.version_info < (3, 10):
logger.typewriter_log(
"WARNING: ",
Fore.RED,
"You are running on an older version of Python. "
"Some people have observed problems with certain "
"parts of Auto-GPT with this version. "
"Please consider upgrading to Python 3.10 or higher.",
)
if install_plugin_deps:
install_plugin_dependencies()
# TODO: have this directory live outside the repository (e.g. in a user's
# home directory) and have it come in as a command line argument or part of
# the env file.
if workspace_directory is None:
workspace_directory = Path(__file__).parent / "auto_gpt_workspace"
else:
workspace_directory = Path(workspace_directory)
# TODO: pass in the ai_settings file and the env file and have them cloned into
# the workspace directory so we can bind them to the agent.
workspace_directory = Workspace.make_workspace(workspace_directory)
cfg.workspace_path = str(workspace_directory)
# HACK: doing this here to collect some globals that depend on the workspace.
file_logger_path = workspace_directory / "file_logger.txt"
if not file_logger_path.exists():
with file_logger_path.open(mode="w", encoding="utf-8") as f:
f.write("File Operation Logger ")
cfg.file_logger_path = str(file_logger_path)
cfg.set_plugins(scan_plugins(cfg, cfg.debug_mode))
# Create a CommandRegistry instance and scan default folder
command_registry = CommandRegistry()
command_registry.import_commands("autogpt.commands.analyze_code")
command_registry.import_commands("autogpt.commands.audio_text")
command_registry.import_commands("autogpt.commands.execute_code")
command_registry.import_commands("autogpt.commands.file_operations")
command_registry.import_commands("autogpt.commands.git_operations")
command_registry.import_commands("autogpt.commands.google_search")
command_registry.import_commands("autogpt.commands.image_gen")
command_registry.import_commands("autogpt.commands.improve_code")
command_registry.import_commands("autogpt.commands.twitter")
command_registry.import_commands("autogpt.commands.web_selenium")
command_registry.import_commands("autogpt.commands.write_tests")
command_registry.import_commands("autogpt.app")
ai_name = ""
ai_config = construct_main_ai_config()
ai_config.command_registry = command_registry
# print(prompt)
# Initialize variables
full_message_history = []
next_action_count = 0
# Initialize memory and make sure it is empty.
# this is particularly important for indexing and referencing pinecone memory
memory = get_memory(cfg, init=True)
logger.typewriter_log(
"Using memory of type:", Fore.GREEN, f"{memory.__class__.__name__}"
)
logger.typewriter_log("Using Browser:", Fore.GREEN, cfg.selenium_web_browser)
system_prompt = ai_config.construct_full_prompt()
if cfg.debug_mode:
logger.typewriter_log("Prompt:", Fore.GREEN, system_prompt)
agent = Agent(
ai_name=ai_name,
memory=memory,
full_message_history=full_message_history,
next_action_count=next_action_count,
command_registry=command_registry,
config=ai_config,
system_prompt=system_prompt,
triggering_prompt=DEFAULT_TRIGGERING_PROMPT,
workspace_directory=workspace_directory,
)
agent.start_interaction_loop()

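run_auto_gpt is deliberately front-end agnostic, so it can be exercised directly. A hypothetical direct call is sketched below; every argument value is an illustrative placeholder rather than the project's real CLI wiring, and a configured OpenAI key is assumed.

# Hypothetical invocation of run_auto_gpt; all values are illustrative defaults.
run_auto_gpt(
    continuous=False,
    continuous_limit=0,
    ai_settings="ai_settings.yaml",
    skip_reprompt=False,
    speak=False,
    debug=False,
    gpt3only=False,
    gpt4only=False,
    memory_type="local",
    browser_name="chrome",
    allow_downloads=False,
    skip_news=True,
    workspace_directory=None,  # falls back to ./auto_gpt_workspace, per above
    install_plugin_deps=False,
)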
autogpt/memory/__init__.py Normal file

@@ -0,0 +1,96 @@
from autogpt.logs import logger
from autogpt.memory.local import LocalCache
from autogpt.memory.no_memory import NoMemory
# List of supported memory backends
# Add a backend to this list if the import attempt is successful
supported_memory = ["local", "no_memory"]
try:
from autogpt.memory.redismem import RedisMemory
supported_memory.append("redis")
except ImportError:
RedisMemory = None
try:
from autogpt.memory.pinecone import PineconeMemory
supported_memory.append("pinecone")
except ImportError:
PineconeMemory = None
try:
from autogpt.memory.weaviate import WeaviateMemory
supported_memory.append("weaviate")
except ImportError:
WeaviateMemory = None
try:
from autogpt.memory.milvus import MilvusMemory
supported_memory.append("milvus")
except ImportError:
MilvusMemory = None
def get_memory(cfg, init=False):
memory = None
if cfg.memory_backend == "pinecone":
if not PineconeMemory:
logger.warn(
"Error: Pinecone is not installed. Please install pinecone"
" to use Pinecone as a memory backend."
)
else:
memory = PineconeMemory(cfg)
if init:
memory.clear()
elif cfg.memory_backend == "redis":
if not RedisMemory:
logger.warn(
"Error: Redis is not installed. Please install redis-py to"
" use Redis as a memory backend."
)
else:
memory = RedisMemory(cfg)
elif cfg.memory_backend == "weaviate":
if not WeaviateMemory:
logger.warn(
"Error: Weaviate is not installed. Please install weaviate-client to"
" use Weaviate as a memory backend."
)
else:
memory = WeaviateMemory(cfg)
elif cfg.memory_backend == "milvus":
if not MilvusMemory:
logger.warn(
"Error: pymilvus sdk is not installed."
"Please install pymilvus to use Milvus or Zilliz Cloud as memory backend."
)
else:
memory = MilvusMemory(cfg)
elif cfg.memory_backend == "no_memory":
memory = NoMemory(cfg)
if memory is None:
memory = LocalCache(cfg)
if init:
memory.clear()
return memory
def get_supported_memory_backends():
return supported_memory
__all__ = [
"get_memory",
"LocalCache",
"RedisMemory",
"PineconeMemory",
"NoMemory",
"MilvusMemory",
"WeaviateMemory",
]

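The factory above degrades gracefully: a backend is only offered if its import succeeded, and anything unrecognized or unavailable falls through to LocalCache. A minimal, self-contained sketch of the same optional-import pattern (using redis-py as the example dependency; nothing from autogpt is imported):

supported = ["local", "no_memory"]

try:
    import redis  # noqa: F401 -- availability check only
    supported.append("redis")
except ImportError:
    redis = None

def pick_backend(name: str) -> str:
    # Fall back to "local" when the requested backend is unavailable.
    return name if name in supported else "local"

print(pick_backend("redis"))   # "redis" if redis-py is installed, else "local"
print(pick_backend("milvus"))  # "local" -- no milvus check was registered here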
autogpt/memory/base.py

@@ -1,35 +1,31 @@
"""Base class for memory providers."""
import abc
from config import AbstractSingleton, Config
import openai
cfg = Config()
def get_ada_embedding(text):
text = text.replace("\n", " ")
if cfg.use_azure:
return openai.Embedding.create(input=[text], engine=cfg.azure_embeddigs_deployment_id, model="text-embedding-ada-002")["data"][0]["embedding"]
else:
return openai.Embedding.create(input=[text], model="text-embedding-ada-002")["data"][0]["embedding"]
from autogpt.singleton import AbstractSingleton
class MemoryProviderSingleton(AbstractSingleton):
@abc.abstractmethod
def add(self, data):
"""Adds to memory"""
pass
@abc.abstractmethod
def get(self, data):
"""Gets from memory"""
pass
@abc.abstractmethod
def clear(self):
"""Clears memory"""
pass
@abc.abstractmethod
def get_relevant(self, data, num_relevant=5):
"""Gets relevant memory for"""
pass
@abc.abstractmethod
def get_stats(self):
"""Get stats from memory"""
pass

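The interface above is small: five abstract methods. A self-contained toy implementation (plain abc.ABC instead of the project's AbstractSingleton, and substring matching instead of embeddings) shows the contract a new backend has to satisfy:

import abc

class MemoryProvider(abc.ABC):
    @abc.abstractmethod
    def add(self, data): ...
    @abc.abstractmethod
    def get(self, data): ...
    @abc.abstractmethod
    def clear(self): ...
    @abc.abstractmethod
    def get_relevant(self, data, num_relevant=5): ...
    @abc.abstractmethod
    def get_stats(self): ...

class ListMemory(MemoryProvider):
    # Toy backend: substring match stands in for embedding similarity.
    def __init__(self):
        self.texts = []
    def add(self, data):
        self.texts.append(data)
        return f"stored: {data}"
    def get(self, data):
        return self.get_relevant(data, 1)
    def clear(self):
        self.texts = []
        return "cleared"
    def get_relevant(self, data, num_relevant=5):
        return [t for t in self.texts if data in t][:num_relevant]
    def get_stats(self):
        return {"count": len(self.texts)}

m = ListMemory()
m.add("visited https://example.com")
print(m.get_relevant("example"))  # ['visited https://example.com']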
autogpt/memory/local.py

@@ -1,10 +1,14 @@
import dataclasses
import orjson
from typing import Any, List, Optional
import numpy as np
import os
from memory.base import MemoryProviderSingleton, get_ada_embedding
from __future__ import annotations
import dataclasses
from pathlib import Path
from typing import Any, List
import numpy as np
import orjson
from autogpt.llm import get_ada_embedding
from autogpt.memory.base import MemoryProviderSingleton
EMBED_DIM = 1536
SAVE_OPTIONS = orjson.OPT_SERIALIZE_NUMPY | orjson.OPT_SERIALIZE_DATACLASS
@@ -23,26 +27,27 @@ class CacheContent:
class LocalCache(MemoryProviderSingleton):
"""A class that stores the memory in a local file"""
# on load, load our database
def __init__(self, cfg) -> None:
self.filename = f"{cfg.memory_index}.json"
if os.path.exists(self.filename):
try:
with open(self.filename, 'w+b') as f:
file_content = f.read()
if not file_content.strip():
file_content = b'{}'
f.write(file_content)
loaded = orjson.loads(file_content)
self.data = CacheContent(**loaded)
except orjson.JSONDecodeError:
print(f"Error: The file '{self.filename}' is not in JSON format.")
self.data = CacheContent()
else:
print(f"Warning: The file '{self.filename}' does not exist. Local memory would not be saved to a file.")
self.data = CacheContent()
"""Initialize a class instance
Args:
cfg: Config object
Returns:
None
"""
workspace_path = Path(cfg.workspace_path)
self.filename = workspace_path / f"{cfg.memory_index}.json"
self.filename.touch(exist_ok=True)
file_content = b"{}"
with self.filename.open("w+b") as f:
f.write(file_content)
self.data = CacheContent()
def add(self, text: str):
"""
@@ -54,7 +59,7 @@ class LocalCache(MemoryProviderSingleton):
Returns: None
"""
if 'Command Error:' in text:
if "Command Error:" in text:
return ""
self.data.texts.append(text)
@@ -70,24 +75,21 @@ class LocalCache(MemoryProviderSingleton):
axis=0,
)
with open(self.filename, 'wb') as f:
out = orjson.dumps(
self.data,
option=SAVE_OPTIONS
)
with open(self.filename, "wb") as f:
out = orjson.dumps(self.data, option=SAVE_OPTIONS)
f.write(out)
return text
def clear(self) -> str:
"""
Clears the redis server.
Clears the data in memory.
Returns: A message indicating that the memory has been cleared.
"""
self.data = CacheContent()
return "Obliviated"
def get(self, data: str) -> Optional[List[Any]]:
def get(self, data: str) -> list[Any] | None:
"""
Gets the data from the memory that is most relevant to the given data.
@@ -98,8 +100,8 @@ class LocalCache(MemoryProviderSingleton):
"""
return self.get_relevant(data, 1)
def get_relevant(self, text: str, k: int) -> List[Any]:
""""
def get_relevant(self, text: str, k: int) -> list[Any]:
""" "
matrix-vector mult to find score-for-each-row-of-matrix
get indices for top-k winning scores
return texts for those indices
@@ -117,7 +119,7 @@ class LocalCache(MemoryProviderSingleton):
return [self.data.texts[i] for i in top_k_indices]
def get_stats(self):
def get_stats(self) -> tuple[int, tuple[int, ...]]:
"""
Returns: The stats of the local cache.
"""

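get_relevant's docstring describes the retrieval as a matrix-vector multiply followed by a top-k argsort. A self-contained illustration, with random vectors standing in for ada embeddings:

import numpy as np

EMBED_DIM = 1536

def top_k_texts(embeddings: np.ndarray, texts: list[str],
                query: np.ndarray, k: int) -> list[str]:
    scores = np.dot(embeddings, query)             # one score per stored row
    top_k_indices = np.argsort(scores)[-k:][::-1]  # argsort ascends, so reverse
    return [texts[i] for i in top_k_indices]

rng = np.random.default_rng(0)
embs = rng.normal(size=(4, EMBED_DIM))
# Querying with row 2's own vector should rank "c" first.
print(top_k_texts(embs, ["a", "b", "c", "d"], embs[2], k=2))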
autogpt/memory/milvus.py Normal file

@@ -0,0 +1,162 @@
""" Milvus memory storage provider."""
import re
from pymilvus import Collection, CollectionSchema, DataType, FieldSchema, connections
from autogpt.config import Config
from autogpt.llm import get_ada_embedding
from autogpt.memory.base import MemoryProviderSingleton
class MilvusMemory(MemoryProviderSingleton):
"""Milvus memory storage provider."""
def __init__(self, cfg: Config) -> None:
"""Construct a milvus memory storage connection.
Args:
cfg (Config): Auto-GPT global config.
"""
self.configure(cfg)
connect_kwargs = {}
if self.username:
connect_kwargs["user"] = self.username
connect_kwargs["password"] = self.password
connections.connect(
**connect_kwargs,
uri=self.uri or "",
address=self.address or "",
secure=self.secure,
)
self.init_collection()
def configure(self, cfg: Config) -> None:
# init with configuration.
self.uri = None
self.address = cfg.milvus_addr
self.secure = cfg.milvus_secure
self.username = cfg.milvus_username
self.password = cfg.milvus_password
self.collection_name = cfg.milvus_collection
# use HNSW by default.
self.index_params = {
"metric_type": "IP",
"index_type": "HNSW",
"params": {"M": 8, "efConstruction": 64},
}
if (self.username is None) != (self.password is None):
raise ValueError(
"Both username and password must be set to use authentication for Milvus"
)
# configured address may be a full URL.
if re.match(r"^(https?|tcp)://", self.address) is not None:
self.uri = self.address
self.address = None
if self.uri.startswith("https"):
self.secure = True
# Zilliz Cloud requires AutoIndex.
if re.match(r"^https://(.*)\.zillizcloud\.(com|cn)", self.uri) is not None:
self.index_params = {
"metric_type": "IP",
"index_type": "AUTOINDEX",
"params": {},
}
def init_collection(self) -> None:
"""Initialize collection in vector database."""
fields = [
FieldSchema(name="pk", dtype=DataType.INT64, is_primary=True, auto_id=True),
FieldSchema(name="embeddings", dtype=DataType.FLOAT_VECTOR, dim=1536),
FieldSchema(name="raw_text", dtype=DataType.VARCHAR, max_length=65535),
]
# create collection if not exist and load it.
self.schema = CollectionSchema(fields, "auto-gpt memory storage")
self.collection = Collection(self.collection_name, self.schema)
# create index if not exist.
if not self.collection.has_index():
self.collection.release()
self.collection.create_index(
"embeddings",
self.index_params,
index_name="embeddings",
)
self.collection.load()
def add(self, data) -> str:
"""Add an embedding of data into memory.
Args:
data (str): The raw text to construct embedding index.
Returns:
str: log.
"""
embedding = get_ada_embedding(data)
result = self.collection.insert([[embedding], [data]])
_text = (
"Inserting data into memory at primary key: "
f"{result.primary_keys[0]}:\n data: {data}"
)
return _text
def get(self, data):
"""Return the most relevant data in memory.
Args:
data: The data to compare to.
"""
return self.get_relevant(data, 1)
def clear(self) -> str:
"""Drop the index in memory.
Returns:
str: log.
"""
self.collection.drop()
self.collection = Collection(self.collection_name, self.schema)
self.collection.create_index(
"embeddings",
self.index_params,
index_name="embeddings",
)
self.collection.load()
return "Obliviated"
def get_relevant(self, data: str, num_relevant: int = 5):
"""Return the top-k relevant data in memory.
Args:
data: The data to compare to.
num_relevant (int, optional): The max number of relevant data.
Defaults to 5.
Returns:
list: The top-k relevant data.
"""
# search the embedding and return the most relevant text.
embedding = get_ada_embedding(data)
search_params = {
"metrics_type": "IP",
"params": {"nprobe": 8},
}
result = self.collection.search(
[embedding],
"embeddings",
search_params,
num_relevant,
output_fields=["raw_text"],
)
return [item.entity.value_of_field("raw_text") for item in result[0]]
def get_stats(self) -> str:
"""
Returns: The stats of the milvus cache.
"""
return f"Entities num: {self.collection.num_entities}"

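The configure step above does double duty: a plain host:port is treated as an address, while anything that parses as a URL becomes a URI, with Zilliz Cloud additionally forcing AUTOINDEX. An illustrative, dependency-free restatement of that branching:

import re

def parse_milvus_target(address: str) -> dict:
    # Mirrors the URL-detection logic above; illustrative only.
    cfg = {"uri": None, "address": address, "secure": False, "index_type": "HNSW"}
    if re.match(r"^(https?|tcp)://", address):
        cfg["uri"], cfg["address"] = address, None
        if cfg["uri"].startswith("https"):
            cfg["secure"] = True
        if re.match(r"^https://(.*)\.zillizcloud\.(com|cn)", cfg["uri"]):
            cfg["index_type"] = "AUTOINDEX"
    return cfg

print(parse_milvus_target("localhost:19530"))
# {'uri': None, 'address': 'localhost:19530', 'secure': False, 'index_type': 'HNSW'}
print(parse_milvus_target("https://abc.zillizcloud.com")["index_type"])  # AUTOINDEX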
autogpt/memory/no_memory.py Normal file

@@ -0,0 +1,73 @@
"""A class that does not store any data. This is the default memory provider."""
from __future__ import annotations
from typing import Any
from autogpt.memory.base import MemoryProviderSingleton
class NoMemory(MemoryProviderSingleton):
"""
A class that does not store any data. This is the default memory provider.
"""
def __init__(self, cfg):
"""
Initializes the NoMemory provider.
Args:
cfg: The config object.
Returns: None
"""
pass
def add(self, data: str) -> str:
"""
Adds a data point to the memory. No action is taken in NoMemory.
Args:
data: The data to add.
Returns: An empty string.
"""
return ""
def get(self, data: str) -> list[Any] | None:
"""
Gets the data from the memory that is most relevant to the given data.
NoMemory always returns None.
Args:
data: The data to compare to.
Returns: None
"""
return None
def clear(self) -> str:
"""
Clears the memory. No action is taken in NoMemory.
Returns: An empty string.
"""
return ""
def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
"""
Returns all the data in the memory that is relevant to the given data.
NoMemory always returns None.
Args:
data: The data to compare to.
num_relevant: The number of relevant data to return.
Returns: None
"""
return None
def get_stats(self):
"""
Returns: An empty dictionary as there are no stats in NoMemory.
"""
return {}

autogpt/memory/pinecone.py

@@ -1,7 +1,9 @@
import pinecone
from colorama import Fore, Style
from memory.base import MemoryProviderSingleton, get_ada_embedding
from autogpt.llm import get_ada_embedding
from autogpt.logs import logger
from autogpt.memory.base import MemoryProviderSingleton
class PineconeMemory(MemoryProviderSingleton):
@@ -15,16 +17,36 @@ class PineconeMemory(MemoryProviderSingleton):
table_name = "auto-gpt"
# this assumes we don't start with memory.
# for now this works.
# we'll need a more complicated and robust system if we want to start with memory.
# we'll need a more complicated and robust system if we want to start with
# memory.
self.vec_num = 0
try:
pinecone.whoami()
except Exception as e:
logger.typewriter_log(
"FAILED TO CONNECT TO PINECONE",
Fore.RED,
Style.BRIGHT + str(e) + Style.RESET_ALL,
)
logger.double_check(
"Please ensure you have setup and configured Pinecone properly for use."
+ f"You can check out {Fore.CYAN + Style.BRIGHT}"
"https://github.com/Torantulino/Auto-GPT#-pinecone-api-key-setup"
f"{Style.RESET_ALL} to ensure you've set up everything correctly."
)
exit(1)
if table_name not in pinecone.list_indexes():
pinecone.create_index(table_name, dimension=dimension, metric=metric, pod_type=pod_type)
pinecone.create_index(
table_name, dimension=dimension, metric=metric, pod_type=pod_type
)
self.index = pinecone.Index(table_name)
def add(self, data):
vector = get_ada_embedding(data)
# no metadata here. We may wish to change that long term.
resp = self.index.upsert([(str(self.vec_num), vector, {"raw_text": data})])
self.index.upsert([(str(self.vec_num), vector, {"raw_text": data})])
_text = f"Inserting data into memory at index: {self.vec_num}:\n data: {data}"
self.vec_num += 1
return _text
@@ -43,9 +65,11 @@ class PineconeMemory(MemoryProviderSingleton):
:param num_relevant: The number of relevant data to return. Defaults to 5
"""
query_embedding = get_ada_embedding(data)
results = self.index.query(query_embedding, top_k=num_relevant, include_metadata=True)
results = self.index.query(
query_embedding, top_k=num_relevant, include_metadata=True
)
sorted_results = sorted(results.matches, key=lambda x: x.score)
return [str(item['metadata']["raw_text"]) for item in sorted_results]
return [str(item["metadata"]["raw_text"]) for item in sorted_results]
def get_stats(self):
return self.index.describe_index_stats()

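Note the ordering in get_relevant: matches are sorted by score ascending, so (assuming higher scores mean closer matches, as with cosine similarity) the most relevant text ends up last in the returned list. A tiny stand-in demonstrates:

class Match:
    # Stand-in for a pinecone match; only .score and .metadata are used above.
    def __init__(self, score: float, text: str):
        self.score = score
        self.metadata = {"raw_text": text}

matches = [Match(0.9, "closest"), Match(0.2, "furthest")]
sorted_results = sorted(matches, key=lambda m: m.score)  # ascending, as above
print([m.metadata["raw_text"] for m in sorted_results])
# ['furthest', 'closest'] -- the best match comes last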
autogpt/memory/redismem.py

@@ -1,24 +1,25 @@
"""Redis memory provider."""
from typing import Any, List, Optional
import redis
from redis.commands.search.field import VectorField, TextField
from redis.commands.search.query import Query
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from __future__ import annotations
from typing import Any
import numpy as np
import redis
from colorama import Fore, Style
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query
from memory.base import MemoryProviderSingleton, get_ada_embedding
from autogpt.llm import get_ada_embedding
from autogpt.logs import logger
from autogpt.memory.base import MemoryProviderSingleton
SCHEMA = [
TextField("data"),
VectorField(
"embedding",
"HNSW",
{
"TYPE": "FLOAT32",
"DIM": 1536,
"DISTANCE_METRIC": "COSINE"
}
{"TYPE": "FLOAT32", "DIM": 1536, "DISTANCE_METRIC": "COSINE"},
),
]
@@ -41,24 +42,40 @@ class RedisMemory(MemoryProviderSingleton):
host=redis_host,
port=redis_port,
password=redis_password,
db=0 # Cannot be changed
db=0, # Cannot be changed
)
self.cfg = cfg
# Check redis connection
try:
self.redis.ping()
except redis.ConnectionError as e:
logger.typewriter_log(
"FAILED TO CONNECT TO REDIS",
Fore.RED,
Style.BRIGHT + str(e) + Style.RESET_ALL,
)
logger.double_check(
"Please ensure you have setup and configured Redis properly for use. "
+ f"You can check out {Fore.CYAN + Style.BRIGHT}"
f"https://github.com/Torantulino/Auto-GPT#redis-setup{Style.RESET_ALL}"
" to ensure you've set up everything correctly."
)
exit(1)
if cfg.wipe_redis_on_start:
self.redis.flushall()
try:
self.redis.ft(f"{cfg.memory_index}").create_index(
fields=SCHEMA,
definition=IndexDefinition(
prefix=[f"{cfg.memory_index}:"],
index_type=IndexType.HASH
)
)
prefix=[f"{cfg.memory_index}:"], index_type=IndexType.HASH
),
)
except Exception as e:
print("Error creating Redis search index: ", e)
existing_vec_num = self.redis.get(f'{cfg.memory_index}-vec_num')
self.vec_num = int(existing_vec_num.decode('utf-8')) if\
existing_vec_num else 0
logger.warn("Error creating Redis search index: ", e)
existing_vec_num = self.redis.get(f"{cfg.memory_index}-vec_num")
self.vec_num = int(existing_vec_num.decode("utf-8")) if existing_vec_num else 0
def add(self, data: str) -> str:
"""
@@ -69,24 +86,22 @@ class RedisMemory(MemoryProviderSingleton):
Returns: Message indicating that the data has been added.
"""
if 'Command Error:' in data:
if "Command Error:" in data:
return ""
vector = get_ada_embedding(data)
vector = np.array(vector).astype(np.float32).tobytes()
data_dict = {
b"data": data,
"embedding": vector
}
data_dict = {b"data": data, "embedding": vector}
pipe = self.redis.pipeline()
pipe.hset(f"{self.cfg.memory_index}:{self.vec_num}", mapping=data_dict)
_text = f"Inserting data into memory at index: {self.vec_num}:\n"\
f"data: {data}"
_text = (
f"Inserting data into memory at index: {self.vec_num}:\n" f"data: {data}"
)
self.vec_num += 1
pipe.set(f'{self.cfg.memory_index}-vec_num', self.vec_num)
pipe.set(f"{self.cfg.memory_index}-vec_num", self.vec_num)
pipe.execute()
return _text
def get(self, data: str) -> Optional[List[Any]]:
def get(self, data: str) -> list[Any] | None:
"""
Gets the data from the memory that is most relevant to the given data.
@@ -106,11 +121,7 @@ class RedisMemory(MemoryProviderSingleton):
self.redis.flushall()
return "Obliviated"
def get_relevant(
self,
data: str,
num_relevant: int = 5
) -> Optional[List[Any]]:
def get_relevant(self, data: str, num_relevant: int = 5) -> list[Any] | None:
"""
Returns all the data in the memory that is relevant to the given data.
Args:
@@ -121,10 +132,12 @@ class RedisMemory(MemoryProviderSingleton):
"""
query_embedding = get_ada_embedding(data)
base_query = f"*=>[KNN {num_relevant} @embedding $vector AS vector_score]"
query = Query(base_query).return_fields(
"data",
"vector_score"
).sort_by("vector_score").dialect(2)
query = (
Query(base_query)
.return_fields("data", "vector_score")
.sort_by("vector_score")
.dialect(2)
)
query_vector = np.array(query_embedding).astype(np.float32).tobytes()
try:
@@ -132,7 +145,7 @@ class RedisMemory(MemoryProviderSingleton):
query, query_params={"vector": query_vector}
)
except Exception as e:
print("Error calling Redis search: ", e)
logger.warn("Error calling Redis search: ", e)
return None
return [result.data for result in results.docs]

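Both add() and get_relevant() above convert embeddings with np.array(...).astype(np.float32).tobytes() because RediSearch vector fields take raw float32 bytes rather than Python lists. A quick sanity check of that packing:

import numpy as np

def pack_vector(embedding: list[float]) -> bytes:
    return np.array(embedding).astype(np.float32).tobytes()

packed = pack_vector([0.1] * 1536)
print(len(packed))  # 6144 -- 1536 floats * 4 bytes each, matching DIM in SCHEMA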
autogpt/memory/weaviate.py Normal file

@@ -0,0 +1,127 @@
import weaviate
from weaviate import Client
from weaviate.embedded import EmbeddedOptions
from weaviate.util import generate_uuid5
from autogpt.llm import get_ada_embedding
from autogpt.logs import logger
from autogpt.memory.base import MemoryProviderSingleton
def default_schema(weaviate_index):
return {
"class": weaviate_index,
"properties": [
{
"name": "raw_text",
"dataType": ["text"],
"description": "original text for the embedding",
}
],
}
class WeaviateMemory(MemoryProviderSingleton):
def __init__(self, cfg):
auth_credentials = self._build_auth_credentials(cfg)
url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}"
if cfg.use_weaviate_embedded:
self.client = Client(
embedded_options=EmbeddedOptions(
hostname=cfg.weaviate_host,
port=int(cfg.weaviate_port),
persistence_data_path=cfg.weaviate_embedded_path,
)
)
logger.info(
f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}"
)
else:
self.client = Client(url, auth_client_secret=auth_credentials)
self.index = WeaviateMemory.format_classname(cfg.memory_index)
self._create_schema()
@staticmethod
def format_classname(index):
# weaviate uses capitalised index names
# The python client uses the following code to format
# index names before the corresponding class is created
index = index.replace("-", "_")
if len(index) == 1:
return index.capitalize()
return index[0].capitalize() + index[1:]
def _create_schema(self):
schema = default_schema(self.index)
if not self.client.schema.contains(schema):
self.client.schema.create_class(schema)
def _build_auth_credentials(self, cfg):
if cfg.weaviate_username and cfg.weaviate_password:
return weaviate.AuthClientPassword(
cfg.weaviate_username, cfg.weaviate_password
)
if cfg.weaviate_api_key:
return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key)
else:
return None
def add(self, data):
vector = get_ada_embedding(data)
doc_uuid = generate_uuid5(data, self.index)
data_object = {"raw_text": data}
with self.client.batch as batch:
batch.add_data_object(
uuid=doc_uuid,
data_object=data_object,
class_name=self.index,
vector=vector,
)
return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}"
def get(self, data):
return self.get_relevant(data, 1)
def clear(self):
self.client.schema.delete_all()
# weaviate does not yet have a neat way to just remove the items in an index
# without removing the entire schema, therefore we need to re-create it
# after a call to delete_all
self._create_schema()
return "Obliterated"
def get_relevant(self, data, num_relevant=5):
query_embedding = get_ada_embedding(data)
try:
results = (
self.client.query.get(self.index, ["raw_text"])
.with_near_vector({"vector": query_embedding, "certainty": 0.7})
.with_limit(num_relevant)
.do()
)
if len(results["data"]["Get"][self.index]) > 0:
return [
str(item["raw_text"]) for item in results["data"]["Get"][self.index]
]
else:
return []
except Exception as err:
logger.warn(f"Unexpected error {err=}, {type(err)=}")
return []
def get_stats(self):
result = self.client.query.aggregate(self.index).with_meta_count().do()
class_data = result["data"]["Aggregate"][self.index]
return class_data[0]["meta"] if class_data else {}

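format_classname slices rather than calling str.capitalize() on the whole name because capitalize() also lowercases everything after the first character, which would mangle mixed-case index names. A quick check:

index = "auto-GPT_memory".replace("-", "_")
print(index.capitalize())                 # 'Auto_gpt_memory' -- case is lost
print(index[0].capitalize() + index[1:])  # 'Auto_GPT_memory' -- case preserved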
autogpt/memory_management/store_memory.py Normal file

@@ -0,0 +1,33 @@
from autogpt.json_utils.utilities import (
LLM_DEFAULT_RESPONSE_FORMAT,
is_string_valid_json,
)
from autogpt.logs import logger
def format_memory(assistant_reply, next_message_content):
# next_message_content stores either the user input or the command result that follows the assistant_reply
result = (
"None" if next_message_content.startswith("Command") else next_message_content
)
user_input = (
"None"
if next_message_content.startswith("Human feedback")
else next_message_content
)
return f"Assistant Reply: {assistant_reply}\nResult: {result}\nHuman Feedback:{user_input}"
def save_memory_trimmed_from_context_window(
full_message_history, next_message_to_add_index, permanent_memory
):
while next_message_to_add_index >= 0:
message_content = full_message_history[next_message_to_add_index]["content"]
if is_string_valid_json(message_content, LLM_DEFAULT_RESPONSE_FORMAT):
next_message = full_message_history[next_message_to_add_index + 1]
memory_to_add = format_memory(message_content, next_message["content"])
logger.debug(f"Storing the following memory: {memory_to_add}")
permanent_memory.add(memory_to_add)
next_message_to_add_index -= 1

autogpt/memory_management/summary_memory.py Normal file

@@ -0,0 +1,112 @@
import json
from typing import Dict, List, Tuple
from autogpt.config import Config
from autogpt.llm.llm_utils import create_chat_completion
cfg = Config()
def get_newly_trimmed_messages(
full_message_history: List[Dict[str, str]],
current_context: List[Dict[str, str]],
last_memory_index: int,
) -> Tuple[List[Dict[str, str]], int]:
"""
This function returns a list of dictionaries contained in full_message_history
with an index higher than last_memory_index that are absent from current_context.
Args:
full_message_history (list): A list of dictionaries representing the full message history.
current_context (list): A list of dictionaries representing the current context.
last_memory_index (int): An integer representing the previous index.
Returns:
list: A list of dictionaries that are in full_message_history with an index higher than last_memory_index and absent from current_context.
int: The new index value for use in the next loop.
"""
# Select messages in full_message_history with an index higher than last_memory_index
new_messages = [
msg for i, msg in enumerate(full_message_history) if i > last_memory_index
]
# Remove messages that are already present in current_context
new_messages_not_in_context = [
msg for msg in new_messages if msg not in current_context
]
# Find the index of the last message processed
new_index = last_memory_index
if new_messages_not_in_context:
last_message = new_messages_not_in_context[-1]
new_index = full_message_history.index(last_message)
return new_messages_not_in_context, new_index
def update_running_summary(current_memory: str, new_events: List[Dict]) -> str:
"""
This function takes a list of dictionaries representing new events and combines them with the current summary,
focusing on key and potentially important information to remember. The updated summary is returned in a message
formatted in the 1st person past tense.
Args:
current_memory (str): The current running summary to be extended.
new_events (List[Dict]): A list of dictionaries containing the latest events to be added to the summary.
Returns:
str: A message containing the updated summary of actions, formatted in the 1st person past tense.
Example:
new_events = [{"event": "entered the kitchen."}, {"event": "found a scrawled note with the number 7"}]
update_running_summary(new_events)
# Returns: "This reminds you of these events from your past: \nI entered the kitchen and found a scrawled note saying 7."
"""
# Replace "assistant" with "you". This produces much better first person past tense results.
for event in new_events:
if event["role"].lower() == "assistant":
event["role"] = "you"
# Remove "thoughts" dictionary from "content"
content_dict = json.loads(event["content"])
if "thoughts" in content_dict:
del content_dict["thoughts"]
event["content"] = json.dumps(content_dict)
elif event["role"].lower() == "system":
event["role"] = "your computer"
# Delete all user messages
elif event["role"] == "user":
new_events.remove(event)
# This can happen at any point during execution, not just the beginning
if len(new_events) == 0:
new_events = "Nothing new happened."
prompt = f'''Your task is to create a concise running summary of actions and information results in the provided text, focusing on key and potentially important information to remember.
You will receive the current summary and your latest actions. Combine them, adding relevant key information from the latest development in 1st person past tense and keeping the summary concise.
Summary So Far:
"""
{current_memory}
"""
Latest Development:
"""
{new_events}
"""
'''
messages = [
{
"role": "user",
"content": prompt,
}
]
current_memory = create_chat_completion(messages, cfg.fast_llm_model)
message_to_return = {
"role": "system",
"content": f"This reminds you of these events from your past: \n{current_memory}",
}
return message_to_return

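A self-contained walk-through of the trimming logic in get_newly_trimmed_messages, with toy messages standing in for real chat history: only messages newer than last_memory_index that have already fallen out of the context window are returned, and the new index points at the last one processed.

history = [
    {"role": "you", "content": "browsed the docs"},   # index 0
    {"role": "your computer", "content": "result A"}, # index 1
    {"role": "you", "content": "wrote notes.txt"},    # index 2
]
current_context = [history[2]]  # only the newest message still fits
last_memory_index = 0           # index 0 was already summarized

new_messages = [m for i, m in enumerate(history) if i > last_memory_index]
not_in_context = [m for m in new_messages if m not in current_context]
new_index = history.index(not_in_context[-1]) if not_in_context else last_memory_index

print(not_in_context)  # [{'role': 'your computer', 'content': 'result A'}]
print(new_index)       # 1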
autogpt/models/base_open_ai_plugin.py Normal file

@@ -0,0 +1,199 @@
"""Handles loading of plugins."""
from typing import Any, Dict, List, Optional, Tuple, TypedDict, TypeVar
from auto_gpt_plugin_template import AutoGPTPluginTemplate
PromptGenerator = TypeVar("PromptGenerator")
class Message(TypedDict):
role: str
content: str
class BaseOpenAIPlugin(AutoGPTPluginTemplate):
"""
This is a BaseOpenAIPlugin class for generating Auto-GPT plugins.
"""
def __init__(self, manifests_specs_clients: dict):
# super().__init__()
self._name = manifests_specs_clients["manifest"]["name_for_model"]
self._version = manifests_specs_clients["manifest"]["schema_version"]
self._description = manifests_specs_clients["manifest"]["description_for_model"]
self._client = manifests_specs_clients["client"]
self._manifest = manifests_specs_clients["manifest"]
self._openapi_spec = manifests_specs_clients["openapi_spec"]
def can_handle_on_response(self) -> bool:
"""This method is called to check that the plugin can
handle the on_response method.
Returns:
bool: True if the plugin can handle the on_response method."""
return False
def on_response(self, response: str, *args, **kwargs) -> str:
"""This method is called when a response is received from the model."""
return response
def can_handle_post_prompt(self) -> bool:
"""This method is called to check that the plugin can
handle the post_prompt method.
Returns:
bool: True if the plugin can handle the post_prompt method."""
return False
def post_prompt(self, prompt: PromptGenerator) -> PromptGenerator:
"""This method is called just after the generate_prompt is called,
but actually before the prompt is generated.
Args:
prompt (PromptGenerator): The prompt generator.
Returns:
PromptGenerator: The prompt generator.
"""
return prompt
def can_handle_on_planning(self) -> bool:
"""This method is called to check that the plugin can
handle the on_planning method.
Returns:
bool: True if the plugin can handle the on_planning method."""
return False
def on_planning(
self, prompt: PromptGenerator, messages: List[Message]
) -> Optional[str]:
"""This method is called before the planning chat completion is done.
Args:
prompt (PromptGenerator): The prompt generator.
messages (List[str]): The list of messages.
"""
pass
def can_handle_post_planning(self) -> bool:
"""This method is called to check that the plugin can
handle the post_planning method.
Returns:
bool: True if the plugin can handle the post_planning method."""
return False
def post_planning(self, response: str) -> str:
"""This method is called after the planning chat completion is done.
Args:
response (str): The response.
Returns:
str: The resulting response.
"""
return response
def can_handle_pre_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the pre_instruction method.
Returns:
bool: True if the plugin can handle the pre_instruction method."""
return False
def pre_instruction(self, messages: List[Message]) -> List[Message]:
"""This method is called before the instruction chat is done.
Args:
messages (List[Message]): The list of context messages.
Returns:
List[Message]: The resulting list of messages.
"""
return messages
def can_handle_on_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the on_instruction method.
Returns:
bool: True if the plugin can handle the on_instruction method."""
return False
def on_instruction(self, messages: List[Message]) -> Optional[str]:
"""This method is called when the instruction chat is done.
Args:
messages (List[Message]): The list of context messages.
Returns:
Optional[str]: The resulting message.
"""
pass
def can_handle_post_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the post_instruction method.
Returns:
bool: True if the plugin can handle the post_instruction method."""
return False
def post_instruction(self, response: str) -> str:
"""This method is called after the instruction chat is done.
Args:
response (str): The response.
Returns:
str: The resulting response.
"""
return response
def can_handle_pre_command(self) -> bool:
"""This method is called to check that the plugin can
handle the pre_command method.
Returns:
bool: True if the plugin can handle the pre_command method."""
return False
def pre_command(
self, command_name: str, arguments: Dict[str, Any]
) -> Tuple[str, Dict[str, Any]]:
"""This method is called before the command is executed.
Args:
command_name (str): The command name.
arguments (Dict[str, Any]): The arguments.
Returns:
Tuple[str, Dict[str, Any]]: The command name and the arguments.
"""
return command_name, arguments
def can_handle_post_command(self) -> bool:
"""This method is called to check that the plugin can
handle the post_command method.
Returns:
bool: True if the plugin can handle the post_command method."""
return False
def post_command(self, command_name: str, response: str) -> str:
"""This method is called after the command is executed.
Args:
command_name (str): The command name.
response (str): The response.
Returns:
str: The resulting response.
"""
return response
def can_handle_chat_completion(
self, messages: Dict[Any, Any], model: str, temperature: float, max_tokens: int
) -> bool:
"""This method is called to check that the plugin can
handle the chat_completion method.
Args:
messages (List[Message]): The messages.
model (str): The model name.
temperature (float): The temperature.
max_tokens (int): The max tokens.
Returns:
bool: True if the plugin can handle the chat_completion method."""
return False
def handle_chat_completion(
self, messages: List[Message], model: str, temperature: float, max_tokens: int
) -> str:
"""This method is called when the chat completion is done.
Args:
messages (List[Message]): The messages.
model (str): The model name.
temperature (float): The temperature.
max_tokens (int): The max tokens.
Returns:
str: The resulting response.
"""
pass

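The constructor above only reads a handful of manifest keys. A minimal, hypothetical manifests_specs_clients payload would look like the following; every value is a placeholder, not a real plugin:

manifests_specs_clients = {
    "manifest": {
        "name_for_model": "todo_plugin",                   # -> self._name
        "schema_version": "v1",                            # -> self._version
        "description_for_model": "Manages a todo list.",   # -> self._description
    },
    "openapi_spec": {"openapi": "3.0.1", "paths": {}},     # -> self._openapi_spec
    "client": None,  # an openapi-python-client instance in real use
}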
autogpt/plugins.py Normal file

@@ -0,0 +1,268 @@
"""Handles loading of plugins."""
import importlib
import json
import os
import zipfile
from pathlib import Path
from typing import List, Optional, Tuple
from urllib.parse import urlparse
from zipimport import zipimporter
import openapi_python_client
import requests
from auto_gpt_plugin_template import AutoGPTPluginTemplate
from openapi_python_client.cli import Config as OpenAPIConfig
from autogpt.config import Config
from autogpt.logs import logger
from autogpt.models.base_open_ai_plugin import BaseOpenAIPlugin
def inspect_zip_for_modules(zip_path: str, debug: bool = False) -> list[str]:
"""
Inspect a zipfile for modules.
Args:
zip_path (str): Path to the zipfile.
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
list[str]: The list of module names found or empty list if none were found.
"""
result = []
with zipfile.ZipFile(zip_path, "r") as zfile:
for name in zfile.namelist():
if name.endswith("__init__.py"):
logger.debug(f"Found module '{name}' in the zipfile at: {name}")
result.append(name)
if len(result) == 0:
logger.debug(f"Module '__init__.py' not found in the zipfile @ {zip_path}.")
return result
def write_dict_to_json_file(data: dict, file_path: str) -> None:
"""
Write a dictionary to a JSON file.
Args:
data (dict): Dictionary to write.
file_path (str): Path to the file.
"""
with open(file_path, "w") as file:
json.dump(data, file, indent=4)
def fetch_openai_plugins_manifest_and_spec(cfg: Config) -> dict:
"""
Fetch the manifest for a list of OpenAI plugins.
Args:
urls (List): List of URLs to fetch.
Returns:
dict: per url dictionary of manifest and spec.
"""
# TODO add directory scan
manifests = {}
for url in cfg.plugins_openai:
openai_plugin_client_dir = f"{cfg.plugins_dir}/openai/{urlparse(url).netloc}"
create_directory_if_not_exists(openai_plugin_client_dir)
if not os.path.exists(f"{openai_plugin_client_dir}/ai-plugin.json"):
try:
response = requests.get(f"{url}/.well-known/ai-plugin.json")
if response.status_code == 200:
manifest = response.json()
if manifest["schema_version"] != "v1":
logger.warn(
f"Unsupported manifest version: {manifest['schem_version']} for {url}"
)
continue
if manifest["api"]["type"] != "openapi":
logger.warn(
f"Unsupported API type: {manifest['api']['type']} for {url}"
)
continue
write_dict_to_json_file(
manifest, f"{openai_plugin_client_dir}/ai-plugin.json"
)
else:
logger.warn(
f"Failed to fetch manifest for {url}: {response.status_code}"
)
except requests.exceptions.RequestException as e:
logger.warn(f"Error while requesting manifest from {url}: {e}")
else:
logger.info(f"Manifest for {url} already exists")
manifest = json.load(open(f"{openai_plugin_client_dir}/ai-plugin.json"))
if not os.path.exists(f"{openai_plugin_client_dir}/openapi.json"):
openapi_spec = openapi_python_client._get_document(
url=manifest["api"]["url"], path=None, timeout=5
)
write_dict_to_json_file(
openapi_spec, f"{openai_plugin_client_dir}/openapi.json"
)
else:
logger.info(f"OpenAPI spec for {url} already exists")
openapi_spec = json.load(open(f"{openai_plugin_client_dir}/openapi.json"))
manifests[url] = {"manifest": manifest, "openapi_spec": openapi_spec}
return manifests
def create_directory_if_not_exists(directory_path: str) -> bool:
"""
Create a directory if it does not exist.
Args:
directory_path (str): Path to the directory.
Returns:
bool: True if the directory was created, else False.
"""
if not os.path.exists(directory_path):
try:
os.makedirs(directory_path)
logger.debug(f"Created directory: {directory_path}")
return True
except OSError as e:
logger.warn(f"Error creating directory {directory_path}: {e}")
return False
else:
logger.info(f"Directory {directory_path} already exists")
return True
def initialize_openai_plugins(
manifests_specs: dict, cfg: Config, debug: bool = False
) -> dict:
"""
Initialize OpenAI plugins.
Args:
manifests_specs (dict): per url dictionary of manifest and spec.
cfg (Config): Config instance including plugins config
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
dict: per url dictionary of manifest, spec and client.
"""
openai_plugins_dir = f"{cfg.plugins_dir}/openai"
if create_directory_if_not_exists(openai_plugins_dir):
for url, manifest_spec in manifests_specs.items():
openai_plugin_client_dir = f"{openai_plugins_dir}/{urlparse(url).hostname}"
_meta_option = (openapi_python_client.MetaType.SETUP,)
_config = OpenAPIConfig(
**{
"project_name_override": "client",
"package_name_override": "client",
}
)
prev_cwd = Path.cwd()
os.chdir(openai_plugin_client_dir)
Path("ai-plugin.json")
if not os.path.exists("client"):
client_results = openapi_python_client.create_new_client(
url=manifest_spec["manifest"]["api"]["url"],
path=None,
meta=_meta_option,
config=_config,
)
if client_results:
logger.warn(
f"Error creating OpenAPI client: {client_results[0].header} \n"
f" details: {client_results[0].detail}"
)
continue
spec = importlib.util.spec_from_file_location(
"client", "client/client/client.py"
)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
client = module.Client(base_url=url)
os.chdir(prev_cwd)
manifest_spec["client"] = client
return manifests_specs
def instantiate_openai_plugin_clients(
manifests_specs_clients: dict, cfg: Config, debug: bool = False
) -> dict:
"""
Instantiates BaseOpenAIPlugin instances for each OpenAI plugin.
Args:
manifests_specs_clients (dict): per url dictionary of manifest, spec and client.
cfg (Config): Config instance including plugins config
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
plugins (dict): per url dictionary of BaseOpenAIPlugin instances.
"""
plugins = {}
for url, manifest_spec_client in manifests_specs_clients.items():
plugins[url] = BaseOpenAIPlugin(manifest_spec_client)
return plugins
def scan_plugins(cfg: Config, debug: bool = False) -> List[AutoGPTPluginTemplate]:
"""Scan the plugins directory for plugins and loads them.
Args:
cfg (Config): Config instance including plugins config
debug (bool, optional): Enable debug logging. Defaults to False.
Returns:
List[AutoGPTPluginTemplate]: List of loaded plugin instances.
"""
loaded_plugins = []
# Generic plugins
plugins_path_path = Path(cfg.plugins_dir)
for plugin in plugins_path_path.glob("*.zip"):
if moduleList := inspect_zip_for_modules(str(plugin), debug):
for module in moduleList:
plugin = Path(plugin)
module = Path(module)
logger.debug(f"Plugin: {plugin} Module: {module}")
zipped_package = zipimporter(str(plugin))
zipped_module = zipped_package.load_module(str(module.parent))
for key in dir(zipped_module):
if key.startswith("__"):
continue
a_module = getattr(zipped_module, key)
a_keys = dir(a_module)
if (
"_abc_impl" in a_keys
and a_module.__name__ != "AutoGPTPluginTemplate"
and denylist_allowlist_check(a_module.__name__, cfg)
):
loaded_plugins.append(a_module())
# OpenAI plugins
if cfg.plugins_openai:
manifests_specs = fetch_openai_plugins_manifest_and_spec(cfg)
if manifests_specs.keys():
manifests_specs_clients = initialize_openai_plugins(
manifests_specs, cfg, debug
)
for url, openai_plugin_meta in manifests_specs_clients.items():
if denylist_allowlist_check(url, cfg):
plugin = BaseOpenAIPlugin(openai_plugin_meta)
loaded_plugins.append(plugin)
if loaded_plugins:
logger.info(f"\nPlugins found: {len(loaded_plugins)}\n" "--------------------")
for plugin in loaded_plugins:
logger.info(f"{plugin._name}: {plugin._version} - {plugin._description}")
return loaded_plugins
def denylist_allowlist_check(plugin_name: str, cfg: Config) -> bool:
"""Check if the plugin is in the allowlist or denylist.
Args:
plugin_name (str): Name of the plugin.
cfg (Config): Config object.
Returns:
True or False
"""
if plugin_name in cfg.plugins_denylist:
return False
if plugin_name in cfg.plugins_allowlist:
return True
ack = input(
f"WARNING: Plugin {plugin_name} found. But not in the"
f" allowlist... Load? ({cfg.authorise_key}/{cfg.exit_key}): "
)
return ack.lower() == cfg.authorise_key

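scan_plugins treats any __init__.py inside a zip as a module candidate, which is easy to verify in isolation. A self-contained sketch that builds a throwaway plugin zip and scans it the same way inspect_zip_for_modules does (the file and module names are made up for the demo):

import zipfile

with zipfile.ZipFile("demo_plugin.zip", "w") as z:
    z.writestr("my_plugin/__init__.py", "class MyPlugin: ...")

with zipfile.ZipFile("demo_plugin.zip", "r") as z:
    modules = [n for n in z.namelist() if n.endswith("__init__.py")]

print(modules)  # ['my_plugin/__init__.py']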
autogpt/processing/html.py Normal file

@@ -0,0 +1,33 @@
"""HTML processing functions"""
from __future__ import annotations
from bs4 import BeautifulSoup
from requests.compat import urljoin
def extract_hyperlinks(soup: BeautifulSoup, base_url: str) -> list[tuple[str, str]]:
"""Extract hyperlinks from a BeautifulSoup object
Args:
soup (BeautifulSoup): The BeautifulSoup object
base_url (str): The base URL
Returns:
List[Tuple[str, str]]: The extracted hyperlinks
"""
return [
(link.text, urljoin(base_url, link["href"]))
for link in soup.find_all("a", href=True)
]
def format_hyperlinks(hyperlinks: list[tuple[str, str]]) -> list[str]:
"""Format hyperlinks to be displayed to the user
Args:
hyperlinks (List[Tuple[str, str]]): The hyperlinks to format
Returns:
List[str]: The formatted hyperlinks
"""
return [f"{link_text} ({link_url})" for link_text, link_url in hyperlinks]

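A short usage sketch for the two helpers above (the HTML and base URL are made up; requires beautifulsoup4 and requests):

from bs4 import BeautifulSoup
from requests.compat import urljoin

html = '<a href="/docs">Docs</a><a href="https://other.example">Away</a><a>no href</a>'
soup = BeautifulSoup(html, "html.parser")
hyperlinks = [
    (link.text, urljoin("https://example.com", link["href"]))
    for link in soup.find_all("a", href=True)  # the bare <a> is skipped
]
print([f"{text} ({url})" for text, url in hyperlinks])
# ['Docs (https://example.com/docs)', 'Away (https://other.example)']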
autogpt/processing/text.py Normal file

@@ -0,0 +1,170 @@
"""Text processing functions"""
from typing import Dict, Generator, Optional
import spacy
from selenium.webdriver.remote.webdriver import WebDriver
from autogpt.config import Config
from autogpt.llm import count_message_tokens, create_chat_completion
from autogpt.logs import logger
from autogpt.memory import get_memory
CFG = Config()
def split_text(
text: str,
max_length: int = CFG.browse_chunk_max_length,
model: str = CFG.fast_llm_model,
question: str = "",
) -> Generator[str, None, None]:
"""Split text into chunks of a maximum length
Args:
text (str): The text to split
max_length (int, optional): The maximum length of each chunk. Defaults to 8192.
Yields:
str: The next chunk of text
Raises:
ValueError: If the text is longer than the maximum length
"""
flatened_paragraphs = " ".join(text.split("\n"))
nlp = spacy.load(CFG.browse_spacy_language_model)
nlp.add_pipe("sentencizer")
doc = nlp(flatened_paragraphs)
sentences = [sent.text.strip() for sent in doc.sents]
current_chunk = []
for sentence in sentences:
message_with_additional_sentence = [
create_message(" ".join(current_chunk) + " " + sentence, question)
]
expected_token_usage = (
count_message_tokens(messages=message_with_additional_sentence, model=model)
+ 1
)
if expected_token_usage <= max_length:
current_chunk.append(sentence)
else:
yield " ".join(current_chunk)
current_chunk = [sentence]
message_this_sentence_only = [
create_message(" ".join(current_chunk), question)
]
expected_token_usage = (
count_message_tokens(messages=message_this_sentence_only, model=model)
+ 1
)
if expected_token_usage > max_length:
raise ValueError(
f"Sentence is too long in webpage: {expected_token_usage} tokens."
)
if current_chunk:
yield " ".join(current_chunk)
def summarize_text(
url: str, text: str, question: str, driver: Optional[WebDriver] = None
) -> str:
"""Summarize text using the OpenAI API
Args:
url (str): The url of the text
text (str): The text to summarize
question (str): The question to ask the model
driver (WebDriver): The webdriver to use to scroll the page
Returns:
str: The summary of the text
"""
if not text:
return "Error: No text to summarize"
model = CFG.fast_llm_model
text_length = len(text)
logger.info(f"Text length: {text_length} characters")
summaries = []
chunks = list(
split_text(
text, max_length=CFG.browse_chunk_max_length, model=model, question=question
),
)
scroll_ratio = 1 / len(chunks)
for i, chunk in enumerate(chunks):
if driver:
scroll_to_percentage(driver, scroll_ratio * i)
logger.info(f"Adding chunk {i + 1} / {len(chunks)} to memory")
memory_to_add = f"Source: {url}\n" f"Raw content part#{i + 1}: {chunk}"
memory = get_memory(CFG)
memory.add(memory_to_add)
messages = [create_message(chunk, question)]
tokens_for_chunk = count_message_tokens(messages, model)
logger.info(
f"Summarizing chunk {i + 1} / {len(chunks)} of length {len(chunk)} characters, or {tokens_for_chunk} tokens"
)
summary = create_chat_completion(
model=model,
messages=messages,
)
summaries.append(summary)
logger.info(
f"Added chunk {i + 1} summary to memory, of length {len(summary)} characters"
)
memory_to_add = f"Source: {url}\n" f"Content summary part#{i + 1}: {summary}"
memory.add(memory_to_add)
logger.info(f"Summarized {len(chunks)} chunks.")
combined_summary = "\n".join(summaries)
messages = [create_message(combined_summary, question)]
return create_chat_completion(
model=model,
messages=messages,
)
def scroll_to_percentage(driver: WebDriver, ratio: float) -> None:
"""Scroll to a percentage of the page
Args:
driver (WebDriver): The webdriver to use
ratio (float): The percentage to scroll to
Raises:
ValueError: If the ratio is not between 0 and 1
"""
if ratio < 0 or ratio > 1:
raise ValueError("Percentage should be between 0 and 1")
driver.execute_script(f"window.scrollTo(0, document.body.scrollHeight * {ratio});")
def create_message(chunk: str, question: str) -> Dict[str, str]:
"""Create a message for the chat completion
Args:
chunk (str): The chunk of text to summarize
question (str): The question to answer
Returns:
Dict[str, str]: The message to send to the chat completion
"""
return {
"role": "user",
"content": f'"""{chunk}""" Using the above text, answer the following'
f' question: "{question}" -- if the question cannot be answered using the text,'
" summarize the text.",
}

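split_text above packs whole sentences greedily until the next sentence would exceed the token budget. A dependency-free sketch of that accumulation loop, with len() standing in for count_message_tokens and character counts for tokens:

def split_by_budget(sentences: list[str], max_len: int) -> list[str]:
    chunks, current = [], []
    for s in sentences:
        if len(" ".join(current + [s])) <= max_len:
            current.append(s)       # sentence still fits in this chunk
        else:
            if current:
                chunks.append(" ".join(current))
            current = [s]           # start a new chunk with this sentence
    if current:
        chunks.append(" ".join(current))
    return chunks

print(split_by_budget(["one two", "three four five", "six"], max_len=15))
# ['one two', 'three four five', 'six']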
autogpt/prompts/generator.py Normal file

@@ -0,0 +1,155 @@
""" A module for generating custom prompt strings."""
import json
from typing import Any, Callable, Dict, List, Optional
class PromptGenerator:
"""
A class for generating custom prompt strings based on constraints, commands,
resources, and performance evaluations.
"""
def __init__(self) -> None:
"""
Initialize the PromptGenerator object with empty lists of constraints,
commands, resources, and performance evaluations.
"""
self.constraints = []
self.commands = []
self.resources = []
self.performance_evaluation = []
self.goals = []
self.command_registry = None
self.name = "Bob"
self.role = "AI"
self.response_format = {
"thoughts": {
"text": "thought",
"reasoning": "reasoning",
"plan": "- short bulleted\n- list that conveys\n- long-term plan",
"criticism": "constructive self-criticism",
"speak": "thoughts summary to say to user",
},
"command": {"name": "command name", "args": {"arg name": "value"}},
}
def add_constraint(self, constraint: str) -> None:
"""
Add a constraint to the constraints list.
Args:
constraint (str): The constraint to be added.
"""
self.constraints.append(constraint)
def add_command(
self,
command_label: str,
command_name: str,
args=None,
function: Optional[Callable] = None,
) -> None:
"""
Add a command to the commands list with a label, name, and optional arguments.
Args:
command_label (str): The label of the command.
command_name (str): The name of the command.
args (dict, optional): A dictionary containing argument names and their
values. Defaults to None.
function (callable, optional): A callable function to be called when
the command is executed. Defaults to None.
"""
if args is None:
args = {}
command_args = {arg_key: arg_value for arg_key, arg_value in args.items()}
command = {
"label": command_label,
"name": command_name,
"args": command_args,
"function": function,
}
self.commands.append(command)
def _generate_command_string(self, command: Dict[str, Any]) -> str:
"""
Generate a formatted string representation of a command.
Args:
command (dict): A dictionary containing command information.
Returns:
str: The formatted command string.
"""
args_string = ", ".join(
f'"{key}": "{value}"' for key, value in command["args"].items()
)
return f'{command["label"]}: "{command["name"]}", args: {args_string}'
def add_resource(self, resource: str) -> None:
"""
Add a resource to the resources list.
Args:
resource (str): The resource to be added.
"""
self.resources.append(resource)
def add_performance_evaluation(self, evaluation: str) -> None:
"""
Add a performance evaluation item to the performance_evaluation list.
Args:
evaluation (str): The evaluation item to be added.
"""
self.performance_evaluation.append(evaluation)
def _generate_numbered_list(self, items: List[Any], item_type="list") -> str:
"""
Generate a numbered list from given items based on the item_type.
Args:
items (list): A list of items to be numbered.
item_type (str, optional): The type of items in the list.
Defaults to 'list'.
Returns:
str: The formatted numbered list.
"""
if item_type == "command":
command_strings = []
if self.command_registry:
command_strings += [
str(item)
for item in self.command_registry.commands.values()
if item.enabled
]
# terminate command is added manually
command_strings += [self._generate_command_string(item) for item in items]
return "\n".join(f"{i+1}. {item}" for i, item in enumerate(command_strings))
else:
return "\n".join(f"{i+1}. {item}" for i, item in enumerate(items))
def generate_prompt_string(self) -> str:
"""
Generate a prompt string based on the constraints, commands, resources,
and performance evaluations.
Returns:
str: The generated prompt string.
"""
formatted_response_format = json.dumps(self.response_format, indent=4)
return (
f"Constraints:\n{self._generate_numbered_list(self.constraints)}\n\n"
"Commands:\n"
f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n"
f"Resources:\n{self._generate_numbered_list(self.resources)}\n\n"
"Performance Evaluation:\n"
f"{self._generate_numbered_list(self.performance_evaluation)}\n\n"
"You should only respond in JSON format as described below \nResponse"
f" Format: \n{formatted_response_format} \nEnsure the response can be"
" parsed by Python json.loads"
)

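generate_prompt_string is ordinary string assembly: numbered sections plus a json.dumps of the response format. A trimmed, self-contained rendering of the same idea (the constraint text is illustrative):

import json

constraints = ["No user assistance", 'Use only commands in double quotes e.g. "command name"']
response_format = {
    "thoughts": {"text": "thought", "plan": "- short bulleted\n- list"},
    "command": {"name": "command name", "args": {"arg name": "value"}},
}

numbered = "\n".join(f"{i+1}. {item}" for i, item in enumerate(constraints))
print(f"Constraints:\n{numbered}\n\n"
      "You should only respond in JSON format as described below \n"
      f"Response Format: \n{json.dumps(response_format, indent=4)}")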
autogpt/prompts/prompt.py Normal file

@@ -0,0 +1,142 @@
from colorama import Fore
from autogpt.config.ai_config import AIConfig
from autogpt.config.config import Config
from autogpt.llm import ApiManager
from autogpt.logs import logger
from autogpt.prompts.generator import PromptGenerator
from autogpt.setup import prompt_user
from autogpt.utils import clean_input
CFG = Config()
DEFAULT_TRIGGERING_PROMPT = (
"Determine which next command to use, and respond using the format specified above:"
)
def build_default_prompt_generator() -> PromptGenerator:
"""
This function generates a prompt string that includes various constraints,
commands, resources, and performance evaluations.
Returns:
str: The generated prompt string.
"""
# Initialize the PromptGenerator object
prompt_generator = PromptGenerator()
# Add constraints to the PromptGenerator object
prompt_generator.add_constraint(
"~4000 word limit for short term memory. Your short term memory is short, so"
" immediately save important information to files."
)
prompt_generator.add_constraint(
"If you are unsure how you previously did something or want to recall past"
" events, thinking about similar events will help you remember."
)
prompt_generator.add_constraint("No user assistance")
prompt_generator.add_constraint(
'Exclusively use the commands listed in double quotes e.g. "command name"'
)
# Define the command list
commands = [
("Task Complete (Shutdown)", "task_complete", {"reason": "<reason>"}),
]
# Add commands to the PromptGenerator object
for command_label, command_name, args in commands:
prompt_generator.add_command(command_label, command_name, args)
# Add resources to the PromptGenerator object
prompt_generator.add_resource(
"Internet access for searches and information gathering."
)
prompt_generator.add_resource("Long Term memory management.")
prompt_generator.add_resource(
"GPT-3.5 powered Agents for delegation of simple tasks."
)
prompt_generator.add_resource("File output.")
# Add performance evaluations to the PromptGenerator object
prompt_generator.add_performance_evaluation(
"Continuously review and analyze your actions to ensure you are performing to"
" the best of your abilities."
)
prompt_generator.add_performance_evaluation(
"Constructively self-criticize your big-picture behavior constantly."
)
prompt_generator.add_performance_evaluation(
"Reflect on past decisions and strategies to refine your approach."
)
prompt_generator.add_performance_evaluation(
"Every command has a cost, so be smart and efficient. Aim to complete tasks in"
" the least number of steps."
)
prompt_generator.add_performance_evaluation("Write all code to a file.")
return prompt_generator
def construct_main_ai_config() -> AIConfig:
"""Construct the prompt for the AI to respond to
Returns:
str: The prompt string
"""
config = AIConfig.load(CFG.ai_settings_file)
if CFG.skip_reprompt and config.ai_name:
logger.typewriter_log("Name :", Fore.GREEN, config.ai_name)
logger.typewriter_log("Role :", Fore.GREEN, config.ai_role)
logger.typewriter_log("Goals:", Fore.GREEN, f"{config.ai_goals}")
logger.typewriter_log(
"API Budget:",
Fore.GREEN,
"infinite" if config.api_budget <= 0 else f"${config.api_budget}",
)
elif config.ai_name:
logger.typewriter_log(
"Welcome back! ",
Fore.GREEN,
f"Would you like me to return to being {config.ai_name}?",
speak_text=True,
)
should_continue = clean_input(
f"""Continue with the last settings?
Name: {config.ai_name}
Role: {config.ai_role}
Goals: {config.ai_goals}
API Budget: {"infinite" if config.api_budget <= 0 else f"${config.api_budget}"}
Continue ({CFG.authorise_key}/{CFG.exit_key}): """
)
if should_continue.lower() == CFG.exit_key:
config = AIConfig()
if not config.ai_name:
config = prompt_user()
config.save(CFG.ai_settings_file)
# set the total api budget
api_manager = ApiManager()
api_manager.set_total_budget(config.api_budget)
# Agent Created, print message
logger.typewriter_log(
config.ai_name,
Fore.LIGHTBLUE_EX,
"has been created with the following details:",
speak_text=True,
)
# Print the ai config details
# Name
logger.typewriter_log("Name:", Fore.GREEN, config.ai_name, speak_text=False)
# Role
logger.typewriter_log("Role:", Fore.GREEN, config.ai_role, speak_text=False)
# Goals
logger.typewriter_log("Goals:", Fore.GREEN, "", speak_text=False)
for goal in config.ai_goals:
logger.typewriter_log("-", Fore.GREEN, goal, speak_text=False)
return config
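For reference, here is a minimal, self-contained sketch of the builder pattern used above. SimplePromptGenerator and its generate_prompt_string method are hypothetical stand-ins, not the project's actual PromptGenerator API; they only illustrate how the accumulated constraints, commands, resources, and evaluations could be rendered into one prompt string.

# Hypothetical sketch of a PromptGenerator-style builder; not the real API.
class SimplePromptGenerator:
    def __init__(self):
        self.constraints = []
        self.commands = []
        self.resources = []
        self.performance_evaluations = []

    def add_constraint(self, constraint: str) -> None:
        self.constraints.append(constraint)

    def add_command(self, label: str, name: str, args: dict) -> None:
        arg_string = ", ".join(f'"{k}": "{v}"' for k, v in args.items())
        self.commands.append(f'{label}: "{name}", args: {arg_string}')

    def add_resource(self, resource: str) -> None:
        self.resources.append(resource)

    def add_performance_evaluation(self, evaluation: str) -> None:
        self.performance_evaluations.append(evaluation)

    def generate_prompt_string(self) -> str:
        def numbered(items: list) -> str:
            return "\n".join(f"{i + 1}. {item}" for i, item in enumerate(items))

        return (
            f"Constraints:\n{numbered(self.constraints)}\n\n"
            f"Commands:\n{numbered(self.commands)}\n\n"
            f"Resources:\n{numbered(self.resources)}\n\n"
            f"Performance Evaluation:\n{numbered(self.performance_evaluations)}"
        )

gen = SimplePromptGenerator()
gen.add_constraint("No user assistance")
gen.add_command("Task Complete (Shutdown)", "task_complete", {"reason": "<reason>"})
print(gen.generate_prompt_string())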

autogpt/setup.py (new file)

@@ -0,0 +1,218 @@
"""Set up the AI and its goals"""
import re
from colorama import Fore, Style
from autogpt import utils
from autogpt.config import Config
from autogpt.config.ai_config import AIConfig
from autogpt.llm import create_chat_completion
from autogpt.logs import logger
CFG = Config()
def prompt_user() -> AIConfig:
"""Prompt the user for input
Returns:
AIConfig: The AIConfig object tailored to the user's input
"""
ai_name = ""
ai_config = None
# Construct the prompt
logger.typewriter_log(
"Welcome to Auto-GPT! ",
Fore.GREEN,
"run with '--help' for more information.",
speak_text=True,
)
# Get user desire
logger.typewriter_log(
"Create an AI-Assistant:",
Fore.GREEN,
"input '--manual' to enter manual mode.",
speak_text=True,
)
user_desire = utils.clean_input(
f"{Fore.LIGHTBLUE_EX}I want Auto-GPT to{Style.RESET_ALL}: "
)
if user_desire == "":
user_desire = "Write a wikipedia style article about the project: https://github.com/significant-gravitas/Auto-GPT" # Default prompt
# If user desire contains "--manual"
if "--manual" in user_desire:
logger.typewriter_log(
"Manual Mode Selected",
Fore.GREEN,
speak_text=True,
)
return generate_aiconfig_manual()
else:
try:
return generate_aiconfig_automatic(user_desire)
except Exception as e:
logger.debug(f"Automatic AI config generation failed: {e}")
logger.typewriter_log(
"Unable to automatically generate AI Config based on user desire.",
Fore.RED,
"Falling back to manual mode.",
speak_text=True,
)
return generate_aiconfig_manual()
def generate_aiconfig_manual() -> AIConfig:
"""
Interactively create an AI configuration by prompting the user to provide the name, role, and goals of the AI.
This function guides the user through a series of prompts to collect the necessary information to create
an AIConfig object. The user will be asked to provide a name and role for the AI, as well as up to five
goals. If the user does not provide a value for any of the fields, default values will be used.
Returns:
AIConfig: An AIConfig object containing the user-defined or default AI name, role, and goals.
"""
# Manual Setup Intro
logger.typewriter_log(
"Create an AI-Assistant:",
Fore.GREEN,
"Enter the name of your AI and its role below. Entering nothing will load"
" defaults.",
speak_text=True,
)
# Get AI Name from User
logger.typewriter_log(
"Name your AI: ", Fore.GREEN, "For example, 'Entrepreneur-GPT'"
)
ai_name = utils.clean_input("AI Name: ")
if ai_name == "":
ai_name = "Entrepreneur-GPT"
logger.typewriter_log(
f"{ai_name} here!", Fore.LIGHTBLUE_EX, "I am at your service.", speak_text=True
)
# Get AI Role from User
logger.typewriter_log(
"Describe your AI's role: ",
Fore.GREEN,
"For example, 'an AI designed to autonomously develop and run businesses with"
" the sole goal of increasing your net worth.'",
)
ai_role = utils.clean_input(f"{ai_name} is: ")
if ai_role == "":
ai_role = "an AI designed to autonomously develop and run businesses with the"
" sole goal of increasing your net worth."
# Enter up to 5 goals for the AI
logger.typewriter_log(
"Enter up to 5 goals for your AI: ",
Fore.GREEN,
"For example: \nIncrease net worth, Grow Twitter Account, Develop and manage"
" multiple businesses autonomously'",
)
logger.info("Enter nothing to load defaults, enter nothing when finished.")
ai_goals = []
for i in range(5):
ai_goal = utils.clean_input(f"{Fore.LIGHTBLUE_EX}Goal{Style.RESET_ALL} {i+1}: ")
if ai_goal == "":
break
ai_goals.append(ai_goal)
if not ai_goals:
ai_goals = [
"Increase net worth",
"Grow Twitter Account",
"Develop and manage multiple businesses autonomously",
]
# Get API Budget from User
logger.typewriter_log(
"Enter your budget for API calls: ",
Fore.GREEN,
"For example: $1.50",
)
logger.info("Enter nothing to let the AI run without monetary limit")
api_budget_input = utils.clean_input(
f"{Fore.LIGHTBLUE_EX}Budget{Style.RESET_ALL}: $"
)
if api_budget_input == "":
api_budget = 0.0
else:
try:
api_budget = float(api_budget_input.replace("$", ""))
except ValueError:
logger.typewriter_log(
"Invalid budget input. Setting budget to unlimited.", Fore.RED
)
api_budget = 0.0
return AIConfig(ai_name, ai_role, ai_goals, api_budget)
def generate_aiconfig_automatic(user_prompt: str) -> AIConfig:
"""Generates an AIConfig object from the given user prompt.
Args:
user_prompt (str): The user's description of what the AI should do
Returns:
AIConfig: The AIConfig object tailored to the user's input
"""
system_prompt = """
Your task is to devise up to 5 highly effective goals and an appropriate role-based name (_GPT) for an autonomous agent, ensuring that the goals are optimally aligned with the successful completion of its assigned task.
The user will provide the task, you will provide only the output in the exact format specified below with no explanation or conversation.
Example input:
Help me with marketing my business
Example output:
Name: CMOGPT
Description: a professional digital marketer AI that assists Solopreneurs in growing their businesses by providing world-class expertise in solving marketing problems for SaaS, content products, agencies, and more.
Goals:
- Engage in effective problem-solving, prioritization, planning, and supporting execution to address your marketing needs as your virtual Chief Marketing Officer.
- Provide specific, actionable, and concise advice to help you make informed decisions without the use of platitudes or overly wordy explanations.
- Identify and prioritize quick wins and cost-effective campaigns that maximize results with minimal time and budget investment.
- Proactively take the lead in guiding you and offering suggestions when faced with unclear information or uncertainty to ensure your marketing strategy remains on track.
"""
# Call LLM with the string as user input
messages = [
{
"role": "system",
"content": system_prompt,
},
{
"role": "user",
"content": f"Task: '{user_prompt}'\nRespond only with the output in the exact format specified in the system prompt, with no explanation or conversation.\n",
},
]
output = create_chat_completion(messages, CFG.fast_llm_model)
# Debug LLM Output
logger.debug(f"AI Config Generator Raw Output: {output}")
# Parse the output. If the reply doesn't match the expected format, the
# AttributeError raised here is caught by prompt_user(), which falls back
# to manual mode.
ai_name = re.search(r"Name(?:\s*):(?:\s*)(.*)", output, re.IGNORECASE).group(1)
ai_role = (
re.search(
r"Description(?:\s*):(?:\s*)(.*?)(?:(?:\n)|Goals)",
output,
re.IGNORECASE | re.DOTALL,
)
.group(1)
.strip()
)
ai_goals = re.findall(r"(?<=\n)-\s*(.*)", output)
api_budget = 0.0 # TODO: parse api budget using a regular expression
return AIConfig(ai_name, ai_role, ai_goals, api_budget)
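The regex parsing above can be exercised directly against the example output embedded in the system prompt. The snippet below is a standalone check; sample_output is an abbreviated copy of that example, and the patterns are the same ones used in generate_aiconfig_automatic.

import re

# sample_output is an abbreviated copy of the example output embedded in
# the system prompt above.
sample_output = (
    "Name: CMOGPT\n"
    "Description: a professional digital marketer AI that assists Solopreneurs\n"
    "Goals:\n"
    "- Engage in effective problem-solving and planning.\n"
    "- Provide specific, actionable, and concise advice.\n"
)

# The same patterns generate_aiconfig_automatic() applies to the LLM output.
ai_name = re.search(r"Name(?:\s*):(?:\s*)(.*)", sample_output, re.IGNORECASE).group(1)
ai_role = (
    re.search(
        r"Description(?:\s*):(?:\s*)(.*?)(?:(?:\n)|Goals)",
        sample_output,
        re.IGNORECASE | re.DOTALL,
    )
    .group(1)
    .strip()
)
ai_goals = re.findall(r"(?<=\n)-\s*(.*)", sample_output)

print(ai_name)   # CMOGPT
print(ai_role)   # a professional digital marketer AI that assists Solopreneurs
print(ai_goals)  # ['Engage in effective problem-solving and planning.', ...]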

autogpt/singleton.py (new file)

@@ -0,0 +1,24 @@
"""The singleton metaclass for ensuring only one instance of a class."""
import abc
class Singleton(abc.ABCMeta, type):
"""
Singleton metaclass for ensuring only one instance of a class.
"""
_instances = {}
def __call__(cls, *args, **kwargs):
"""Call method for the singleton metaclass."""
if cls not in cls._instances:
cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
return cls._instances[cls]
class AbstractSingleton(abc.ABC, metaclass=Singleton):
"""
Abstract singleton class for ensuring only one instance of a class.
"""
pass
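A quick usage sketch of the metaclass: every "construction" after the first returns the cached instance. AppConfig is a made-up class for illustration.

# AppConfig is a made-up class for illustration; any class that sets
# metaclass=Singleton gets exactly one shared instance.
class AppConfig(metaclass=Singleton):
    def __init__(self):
        self.debug = False

a = AppConfig()
b = AppConfig()
assert a is b            # __call__ returned the cached instance
a.debug = True
assert b.debug is True   # one instance, so state is shared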

autogpt/speech/__init__.py (new file)

@@ -0,0 +1,4 @@
"""This module contains the speech recognition and speech synthesis functions."""
from autogpt.speech.say import say_text
__all__ = ["say_text"]

autogpt/speech/base.py (new file)

@@ -0,0 +1,50 @@
"""Base class for all voice classes."""
import abc
from threading import Lock
from autogpt.singleton import AbstractSingleton
class VoiceBase(AbstractSingleton):
"""
Base class for all voice classes.
"""
def __init__(self):
"""
Initialize the voice class.
"""
self._url = None
self._headers = None
self._api_key = None
self._voices = []
self._mutex = Lock()
self._setup()
def say(self, text: str, voice_index: int = 0) -> bool:
"""
Say the given text.
Args:
text (str): The text to say.
voice_index (int): The index of the voice to use.
"""
with self._mutex:
return self._speech(text, voice_index)
@abc.abstractmethod
def _setup(self) -> None:
"""
Setup the voices, API key, etc.
"""
pass
@abc.abstractmethod
def _speech(self, text: str, voice_index: int = 0) -> bool:
"""
Play the given text.
Args:
text (str): The text to play.
"""
pass
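A minimal concrete subclass shows the contract: implement _setup and _speech, and the inherited say method handles locking. PrintVoice is illustrative only, not part of the codebase.

# PrintVoice is illustrative only, not part of the codebase: it "speaks"
# by printing, which is enough to satisfy the VoiceBase contract.
class PrintVoice(VoiceBase):
    def _setup(self) -> None:
        pass  # no API keys or voice lists needed

    def _speech(self, text: str, voice_index: int = 0) -> bool:
        print(f"[voice {voice_index}] {text}")
        return True

# say() is inherited from VoiceBase and wraps _speech() in the mutex.
PrintVoice().say("Hello, world")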

autogpt/speech/brian.py (new file)

@@ -0,0 +1,43 @@
import logging
import os
import requests
from playsound import playsound
from autogpt.speech.base import VoiceBase
class BrianSpeech(VoiceBase):
"""Brian speech module for autogpt"""
def _setup(self) -> None:
"""Setup the voices, API key, etc."""
pass
def _speech(self, text: str, _: int = 0) -> bool:
"""Speak text using Brian with the streamelements API
Args:
text (str): The text to speak
Returns:
bool: True if the request was successful, False otherwise
"""
# Let requests URL-encode the text rather than interpolating it raw.
response = requests.get(
"https://api.streamelements.com/kappa/v2/speech",
params={"voice": "Brian", "text": text},
)
if response.status_code == 200:
with open("speech.mp3", "wb") as f:
f.write(response.content)
playsound("speech.mp3")
os.remove("speech.mp3")
return True
else:
logging.error(
"Request failed with status code: %s, response content: %s",
response.status_code,
response.content,
)
return False
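A short usage sketch of the class above; say is inherited from VoiceBase.

from autogpt.speech.brian import BrianSpeech

# BrianSpeech needs no API key (_setup is a no-op). say() downloads the
# clip to speech.mp3, plays it, removes the file, and reports success.
voice = BrianSpeech()
if not voice.say("The build has finished."):
    print("speech request failed")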

autogpt/speech/eleven_labs.py (new file)

@@ -0,0 +1,88 @@
"""ElevenLabs speech module"""
import os
import requests
from playsound import playsound
from autogpt.config import Config
from autogpt.speech.base import VoiceBase
PLACEHOLDERS = {"your-voice-id"}
class ElevenLabsSpeech(VoiceBase):
"""ElevenLabs speech class"""
def _setup(self) -> None:
"""Set up the voices, API key, etc.
Returns:
None: None
"""
cfg = Config()
default_voices = ["ErXwobaYiN019PkySvjV", "EXAVITQu4vr4xnSDxMaL"]
voice_options = {
"Rachel": "21m00Tcm4TlvDq8ikWAM",
"Domi": "AZnzlk1XvdvUeBnXmlld",
"Bella": "EXAVITQu4vr4xnSDxMaL",
"Antoni": "ErXwobaYiN019PkySvjV",
"Elli": "MF3mGyEYCl7XYWbV9V6O",
"Josh": "TxGEqnHWrfWFTfGW9XjX",
"Arnold": "VR6AewLTigWG4xSOukaG",
"Adam": "pNInz6obpgDQGcFmaJgB",
"Sam": "yoZ06aMxZJJ28mfd3POQ",
}
self._headers = {
"Content-Type": "application/json",
"xi-api-key": cfg.elevenlabs_api_key,
}
self._voices = default_voices.copy()
if cfg.elevenlabs_voice_1_id in voice_options:
cfg.elevenlabs_voice_1_id = voice_options[cfg.elevenlabs_voice_1_id]
if cfg.elevenlabs_voice_2_id in voice_options:
cfg.elevenlabs_voice_2_id = voice_options[cfg.elevenlabs_voice_2_id]
self._use_custom_voice(cfg.elevenlabs_voice_1_id, 0)
self._use_custom_voice(cfg.elevenlabs_voice_2_id, 1)
def _use_custom_voice(self, voice, voice_index) -> None:
"""Use a custom voice if provided and not a placeholder
Args:
voice (str): The voice ID
voice_index (int): The voice index
Returns:
None: None
"""
# Placeholder values that should be treated as empty
if voice and voice not in PLACEHOLDERS:
self._voices[voice_index] = voice
def _speech(self, text: str, voice_index: int = 0) -> bool:
"""Speak text using elevenlabs.io's API
Args:
text (str): The text to speak
voice_index (int, optional): The voice to use. Defaults to 0.
Returns:
bool: True if the request was successful, False otherwise
"""
# Imported here rather than at module level to avoid a circular import.
from autogpt.logs import logger
tts_url = (
f"https://api.elevenlabs.io/v1/text-to-speech/{self._voices[voice_index]}"
)
response = requests.post(tts_url, headers=self._headers, json={"text": text})
if response.status_code == 200:
with open("speech.mpeg", "wb") as f:
f.write(response.content)
playsound("speech.mpeg", True)
os.remove("speech.mpeg")
return True
else:
logger.warn(f"Request failed with status code: {response.status_code}")
logger.info(f"Response content: {response.content}")
return False
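A sketch of how the name-to-ID resolution in _setup plays out. The environment variable names below are assumptions about how autogpt.config.Config is populated, and the key value is a placeholder; the voice IDs come from the voice_options table above.

import os

# Assumed Config environment variables (hypothetical in this sketch).
os.environ["ELEVENLABS_API_KEY"] = "sk-placeholder"  # placeholder key
os.environ["ELEVENLABS_VOICE_1_ID"] = "Rachel"       # friendly name, not an ID

from autogpt.speech.eleven_labs import ElevenLabsSpeech

voice = ElevenLabsSpeech()
# _setup() looked "Rachel" up in voice_options, so slot 0 now holds
# "21m00Tcm4TlvDq8ikWAM" and say() posts to that voice's endpoint.
voice.say("Setup complete.", voice_index=0)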

Some files were not shown because too many files have changed in this diff.