Use GPT-4 in Agent loop by default (#4899)

* Use GPT-4 as default smart LLM in Agent

* Rename (smart|fast)_llm_model to (smart|fast)_llm everywhere

* Fix test_config.py::test_initial_values

* Fix test_config.py::test_azure_config

* Fix Azure config backwards compatibility
Author: Reinier van der Leer
Date: 2023-07-07 03:42:18 +02:00
Committed by: GitHub
Parent: ac17518663
Commit: bde007e6f7
16 changed files with 109 additions and 112 deletions


@@ -16,7 +16,7 @@ Configuration is controlled through the `Config` object. You can set configurati
- `EMBEDDING_MODEL`: LLM Model to use for embedding tasks. Default: text-embedding-ada-002
- `EXECUTE_LOCAL_COMMANDS`: If shell commands should be executed locally. Default: False
- `EXIT_KEY`: Exit key accepted to exit. Default: n
-- `FAST_LLM_MODEL`: LLM Model to use for most tasks. Default: gpt-3.5-turbo
+- `FAST_LLM`: LLM Model to use for most tasks. Default: gpt-3.5-turbo
- `GITHUB_API_KEY`: [Github API Key](https://github.com/settings/tokens). Optional.
- `GITHUB_USERNAME`: GitHub Username. Optional.
- `GOOGLE_API_KEY`: Google API key. Optional.
@@ -43,7 +43,7 @@ Configuration is controlled through the `Config` object. You can set configurati
- `SHELL_ALLOWLIST`: List of shell commands that ARE allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `allowlist`. Default: None
- `SHELL_COMMAND_CONTROL`: Whether to use `allowlist` or `denylist` to determine what shell commands can be executed (Default: denylist)
- `SHELL_DENYLIST`: List of shell commands that ARE NOT allowed to be executed by Auto-GPT. Only applies if `SHELL_COMMAND_CONTROL` is set to `denylist`. Default: sudo,su
-- `SMART_LLM_MODEL`: LLM Model to use for "smart" tasks. Default: gpt-3.5-turbo
+- `SMART_LLM`: LLM Model to use for "smart" tasks. Default: gpt-4
- `STREAMELEMENTS_VOICE`: StreamElements voice to use. Default: Brian
- `TEMPERATURE`: Value of temperature given to OpenAI. Value from 0 to 2. Lower is more deterministic, higher is more random. See https://platform.openai.com/docs/api-reference/completions/create#completions/create-temperature
- `TEXT_TO_SPEECH_PROVIDER`: Text to Speech Provider. Options are `gtts`, `macos`, `elevenlabs`, and `streamelements`. Default: gtts
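For reference, a minimal `.env` excerpt using the renamed variables might look like the sketch below (the values shown are just the documented defaults):

```
# Illustrative .env excerpt; values are the documented defaults
FAST_LLM=gpt-3.5-turbo   # used for most tasks (previously FAST_LLM_MODEL)
SMART_LLM=gpt-4          # used for "smart" tasks (previously SMART_LLM_MODEL)
```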


@@ -133,8 +133,8 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
make an Azure configuration file:
- Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section:
-- `fast_llm_model_deployment_id`: your gpt-3.5-turbo or gpt-4 deployment ID
-- `smart_llm_model_deployment_id`: your gpt-4 deployment ID
+- `fast_llm_deployment_id`: your gpt-3.5-turbo or gpt-4 deployment ID
+- `smart_llm_deployment_id`: your gpt-4 deployment ID
- `embedding_model_deployment_id`: your text-embedding-ada-002 v2 deployment ID
Example:
@@ -143,7 +143,7 @@ Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](htt
# Please specify all of these values as double-quoted strings
# Replace string in angled brackets (<>) to your own deployment Name
azure_model_map:
-    fast_llm_model_deployment_id: "<auto-gpt-deployment>"
+    fast_llm_deployment_id: "<auto-gpt-deployment>"
...
Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.
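Putting the renamed keys together, a complete `azure_model_map` section might look like the sketch below; the angle-bracketed names and the `azure_api_base`/`azure_api_version` values are placeholders to replace with your own deployment details:

```
# azure.yaml: illustrative sketch only; replace every <placeholder> with your own values
azure_api_base: "https://<your-resource-name>.openai.azure.com"
azure_api_version: "<api-version>"
azure_model_map:
    fast_llm_deployment_id: "<your-gpt-3.5-turbo-or-gpt-4-deployment>"
    smart_llm_deployment_id: "<your-gpt-4-deployment>"
    embedding_model_deployment_id: "<your-text-embedding-ada-002-v2-deployment>"
```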


@@ -72,7 +72,7 @@ If you don't have access to GPT-4, this mode allows you to use Auto-GPT!
./run.sh --gpt3only
```
-You can achieve the same by setting `SMART_LLM_MODEL` in `.env` to `gpt-3.5-turbo`.
+You can achieve the same by setting `SMART_LLM` in `.env` to `gpt-3.5-turbo`.
### GPT-4 ONLY Mode
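The two equivalent ways of enabling the GPT-3.5-only mode described above, using the renamed variable, are:

```
# Either pass the flag when launching:
./run.sh --gpt3only

# ...or set the smart model in .env (formerly SMART_LLM_MODEL):
SMART_LLM=gpt-3.5-turbo
```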