[doc] Improvements: Tutorials and Explanation (#1603)

* [doc] Modified README: detailed explanation, cleared conceptual confusions, added explicit examples

* [doc] Modified README: emphasize precedence note, concise description

* [doc] Modified README: fixed CMD to project directory
This commit is contained in:
Ding3LI
2023-04-15 13:21:39 -05:00
committed by GitHub
parent 1ce6419698
commit 9f822ec5ca

README.md

@@ -1,8 +1,10 @@
# Auto-GPT: An Autonomous GPT-4 Experiment
### 🔴 🔴 🔴 Urgent: USE `stable` not `master` 🔴 🔴 🔴
> This information takes precedence over any other information provided and should be followed accordingly.
Our workflow has been improved, but please note that the `master` branch may often be in a **broken** state.
Please download the latest `stable` release from here: https://github.com/Torantulino/Auto-GPT/releases/latest.
![GitHub Repo stars](https://img.shields.io/github/stars/Torantulino/auto-gpt?style=social)
[![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)](https://twitter.com/SigGravitas)
@@ -91,59 +93,65 @@ Optional:
To install Auto-GPT, follow these steps:
1. Make sure you have all the **requirements** listed above; if not, install/get them
_To execute the following commands, open a CMD, Bash, or Powershell window by navigating to a folder on your computer, typing `CMD` in the folder path at the top, and pressing enter._
2. Clone the repository: For this step, you need Git installed. Alternatively, you can download the zip file by clicking the button at the top of this page ☝️
```bash
git clone https://github.com/Torantulino/Auto-GPT.git
```
3. Navigate to the directory where the repository was downloaded
```bash
cd Auto-GPT
```
4. Install the required dependencies
```bash
pip install -r requirements.txt
```
5. Rename `.env.template` to `.env` and fill in your `OPENAI_API_KEY`. If you plan to use Speech Mode, fill in your `ELEVEN_LABS_API_KEY` as well _(see the example `.env` snippet after this list)_.
- See [OpenAI API Keys Configuration](#openai-api-keys-configuration) to obtain your OpenAI API key.
- Obtain your ElevenLabs API key from: https://elevenlabs.io. You can view your xi-api-key using the "Profile" tab on the website.
- If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and then follow these steps:
- Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section:
- `fast_llm_model_deployment_id` - your gpt-3.5-turbo or gpt-4 deployment ID
- `smart_llm_model_deployment_id` - your gpt-4 deployment ID
- `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment ID
- Please specify all of these values as double-quoted strings
> Replace the string in angle brackets (<>) with your own ID
```yaml
azure_model_map:
fast_llm_model_deployment_id: "<my-fast-llm-deployment-id>"
...
```
- Details can be found here: https://pypi.org/project/openai/ in the `Microsoft Azure Endpoints` section and here: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line for the embedding model.
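For reference, a minimal `.env` for a plain OpenAI setup might look like the sketch below (the values are placeholders, not real keys):
```bash
# Example .env values (placeholders; substitute your own keys)
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
ELEVEN_LABS_API_KEY=<YOUR_ELEVEN_LABS_API_KEY>  # only needed for Speech Mode
USE_AZURE=False                                 # set to True for Azure, then configure azure.yaml
```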
## 🔧 Usage
1. Run the `autogpt` Python module in your terminal
```bash
python -m autogpt
```
2. After each action, choose from options to authorize command(s),
exit the program, or provide feedback to the AI.
    1. To authorize a single command, enter `y`
    2. To authorize a series of _N_ continuous commands, enter `y -N` _(e.g. `y -5` runs five commands in a row)_
    3. To exit the program, enter `n`
### Logs
Activity and error logs are located in the `./output/logs` directory.
To print out debug logs:
```bash
python -m autogpt --debug
@@ -165,17 +173,26 @@ docker run -it --env-file=./.env -v $PWD/auto_gpt_workspace:/app/auto_gpt_worksp
### Command Line Arguments
Here are some common arguments you can use when running Auto-GPT:
> Replace anything in angle brackets (<>) with a value you want to specify
* View all available command line arguments
```bash
python scripts/main.py --help
```
* Run Auto-GPT with a different AI Settings file
```bash
python scripts/main.py --ai-settings <filename>
```
* Specify one of the available memory backends: `local`, `redis`, `pinecone`, or `no_memory`
```bash
python scripts/main.py --use-memory <memory-backend>
```
> **NOTE**: There are shorthands for some of these flags, for example `-m` for `--use-memory`. Use `python scripts/main.py --help` for more information
## 🗣️ Speech Mode
Use this to enable TTS _(Text-to-Speech)_ for Auto-GPT
```bash
python -m autogpt --speak
```
@@ -212,59 +229,47 @@ _Remember that your free daily custom search quota allows only up to 100 searche
For Windows Users:
```bash
setx GOOGLE_API_KEY "YOUR_GOOGLE_API_KEY"
setx CUSTOM_SEARCH_ENGINE_ID "YOUR_CUSTOM_SEARCH_ENGINE_ID"
```
For macOS and Linux users:
```bash
export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
```
## Memory Backend Setup
Set up any one backend to persist memory.
### Redis Setup
> _**CAUTION**_ \
This is not intended to be publicly accessible and lacks security measures. Avoid exposing Redis to the internet without a password, or ideally at all.
1. Install Docker Desktop, then start a Redis container:
```bash
docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
```
> See https://hub.docker.com/r/redis/redis-stack-server for setting a password and additional configuration.
2. Set the following environment variables
> Replace **PASSWORD** in angle brackets (<>) with your own password
```bash
MEMORY_BACKEND=redis
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=<PASSWORD>
```
You can optionally set:
```bash
WIPE_REDIS_ON_START=False
```
Setting this to `False` persists the memory stored in Redis across runs.
You can specify the memory index for Redis using the following:
```bash
MEMORY_INDEX=<WHATEVER>
```
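To confirm that the Redis instance is reachable before starting Auto-GPT, a quick check with the `redis` Python package can help. This is an illustrative sketch, not part of Auto-GPT, and assumes the `redis` package is installed:
```python
# Illustrative Redis connectivity check (assumes the `redis` package is installed)
import redis

client = redis.Redis(host="localhost", port=6379, password="<PASSWORD>")
print(client.ping())  # prints True if the server is reachable and the password is accepted
```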
### 🌲 Pinecone API Key Setup
@@ -291,30 +296,30 @@ Pinecone enables the storage of vast amounts of vector-based memory, allowing fo
In the `.env` file set:
- `PINECONE_API_KEY`
- `PINECONE_ENV` (example: _"us-east4-gcp"_)
- `MEMORY_BACKEND=pinecone`
Alternatively, you can set them from the command line (advanced):
For Windows Users:
```bash
setx PINECONE_API_KEY "<YOUR_PINECONE_API_KEY>"
setx PINECONE_ENV "<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
setx MEMORY_BACKEND "pinecone"
```
For macOS and Linux users:
```bash
export PINECONE_API_KEY="<YOUR_PINECONE_API_KEY>"
export PINECONE_ENV="<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
export MEMORY_BACKEND="pinecone"
```
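To verify the Pinecone credentials before a run, a short check with the `pinecone-client` package can be used. This is an illustrative sketch, not part of Auto-GPT:
```python
# Illustrative Pinecone credential check (assumes the `pinecone-client` package is installed)
import pinecone

pinecone.init(api_key="<YOUR_PINECONE_API_KEY>", environment="<YOUR_PINECONE_REGION>")
print(pinecone.list_indexes())  # lists the indexes visible to this API key
```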
## Setting Your Cache Type
By default, Auto-GPT uses LocalCache instead of Redis or Pinecone.
To switch to either, change the `MEMORY_BACKEND` env variable to the value that you want.
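For example, to switch to the Redis backend on macOS or Linux (Windows users can use `setx`, as shown above):
```bash
export MEMORY_BACKEND="redis"
```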
@@ -329,7 +334,7 @@ To switch to either, change the `MEMORY_BACKEND` env variable to the value that
## 🧠 Memory pre-seeding
```bash
# python scripts/data_ingestion.py -h
usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]
@@ -346,7 +351,7 @@ options:
# python scripts/data_ingestion.py --dir seed_data --init --overlap 200 --max_length 1000
```
This script, located at `scripts/data_ingestion.py`, allows you to ingest files into memory and pre-seed it before running Auto-GPT.
Memory pre-seeding is a technique that involves ingesting relevant documents or data into the AI's memory so that it can use this information to generate more informed and accurate responses.
@@ -355,35 +360,34 @@ To pre-seed the memory, the content of each document is split into chunks of a s
This technique is particularly useful when working with large amounts of data or when there is specific information that the AI needs to be able to access quickly.
By pre-seeding the memory, the AI can retrieve and use this information more efficiently, saving time and API calls and improving the accuracy of its responses.
You could, for example, download the documentation of an API, a GitHub repository, etc. and ingest it into memory before running Auto-GPT.
⚠️ If you use Redis as your memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START` set to `False` in your `.env` file.
For other memory backends, we currently forcefully wipe the memory when starting Auto-GPT. To ingest data with those memory backends, you can call the `data_ingestion.py` script anytime during an Auto-GPT run.
Memories will be available to the AI immediately as they are ingested, even if ingested while Auto-GPT is running.
In the example above, the script initializes the memory and ingests all files within the `seed_data` directory into memory, with an overlap between chunks of 200 and a maximum chunk length of 1000.
Note that you can also use the `--file` argument to ingest a single file into memory and that the script will only ingest files within the `auto_gpt_workspace` directory.
You can adjust the `max_length` and `overlap` parameters to fine-tune the way the documents are presented to the AI when it "recalls" that memory _(see the sketch after this list)_:
- Adjusting the `overlap` value allows the AI to access more contextual information from each chunk when recalling information, but will result in more chunks being created and therefore increase memory backend usage and OpenAI API requests.
- Reducing the `max_length` value will create more chunks, which can save prompt tokens by allowing for more message history in the context, but will also increase the number of chunks.
- Increasing the `max_length` value will provide the AI with more contextual information from each chunk, reducing the number of chunks created and saving on OpenAI API requests. However, this may also use more prompt tokens and decrease the overall context available to the AI.
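As a rough illustration of how this chunking works, here is a minimal sketch in Python (not the actual `data_ingestion.py` implementation):
```python
def chunk_text(text: str, max_length: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most max_length characters,
    where consecutive chunks share overlap characters of context."""
    step = max_length - overlap  # each chunk starts this far after the previous one
    return [text[start:start + max_length] for start in range(0, len(text), step)]

# With max_length=1000 and overlap=200, a 2000-character document yields
# chunks starting at 0, 800 and 1600: three chunks instead of two.
print(len(chunk_text("x" * 2000)))  # 3
```
A larger `overlap` shrinks the step between chunks, so more chunks are produced from the same document; a larger `max_length` does the opposite.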
## 💀 Continuous Mode ⚠️
Run the AI **without** user authorization, 100% automated.
Continuous mode is NOT recommended.
It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize.
Use at your own risk.
1. Run the `autogpt` Python module in your terminal:
```bash
python -m autogpt --speak --continuous
```
2. To exit the program, press Ctrl + C
@@ -392,7 +396,7 @@ python -m autogpt --speak --continuous
If you don't have access to the GPT-4 API, this mode will allow you to use Auto-GPT!
```bash
python -m autogpt --speak --gpt3only
```
@@ -404,15 +408,16 @@ By default, Auto-GPT uses DALL-e for image generation. To use Stable Diffusion,
Once you have a token, set these variables in your `.env`:
```bash
IMAGE_PROVIDER=sd
HUGGINGFACE_API_TOKEN="YOUR_HUGGINGFACE_API_TOKEN"
```
## Selenium
```bash
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 <YOUR_CLIENT>
```
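For example, `<YOUR_CLIENT>` could be Auto-GPT itself, run under the virtual display (an illustrative usage, assuming the setup above):
```bash
sudo Xvfb :10 -ac -screen 0 1024x768x24 & DISPLAY=:10 python -m autogpt
```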
## ⚠️ Limitations
This experiment aims to showcase the potential of GPT-4 but comes with some limitations:
@@ -453,13 +458,13 @@ We look forward to connecting with you and hearing your thoughts, ideas, and exp
To run tests, run the following command:
```bash
python -m unittest discover tests
```
To run tests and see coverage, run the following command:
```bash
coverage run -m unittest discover tests
```
@@ -469,7 +474,7 @@ This project uses [flake8](https://flake8.pycqa.org/en/latest/) for linting. We
To run the linter, run the following command:
```bash
flake8 autogpt/ tests/
# Or, if you want to run flake8 with the same configuration as the CI: