AutoGPT: An Autonomous GPT-4 Experiment
📖 Documentation | 🚀 Contributing
AutoGPT is an experimental open-source application showcasing the capabilities of modern Large Language Models. This program, driven by GPT-4, chains together LLM "thoughts" to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, AutoGPT pushes the boundaries of what is possible with AI.
Demo April 16th 2023
Demo made by Blake Werlinger
🚀 Features
- 🔌 Agent Protocol (docs)
- 💻 Easy to use UI
- 🌐 Internet access for searches and information gathering
- 🧠 Powered by a mix of GPT-4 and GPT-3.5 Turbo
- 🔗 Access to popular websites and platforms
- 🗃️ File generation and editing capabilities
- 🔌 Extensibility with Plugins
Setting up AutoGPT
Prerequisites
Installation
All commands run from the classic/ directory (parent of this directory):
```shell
cd classic
poetry install
cp .env.template .env
# Edit .env with your OPENAI_API_KEY
```
Configuration
AutoGPT uses a layered configuration system:
1. Environment Variables (.env)
```shell
# Required
OPENAI_API_KEY=sk-...

# Optional LLM settings
SMART_LLM=gpt-4o      # Model for complex reasoning
FAST_LLM=gpt-4o-mini  # Model for simple tasks

# Optional search providers
TAVILY_API_KEY=tvly-...
SERPER_API_KEY=...

# Optional infrastructure
LOG_LEVEL=DEBUG
PORT=8000
FILE_STORAGE_BACKEND=local  # local, s3, or gcs
```
2. Workspace Settings (.autogpt/autogpt.yaml)
Workspace-wide permissions for all agents:
```yaml
allow:
  - read_file({workspace}/**)
  - write_to_file({workspace}/**)
  - web_search(*)
deny:
  - read_file(**.env)
  - execute_shell(sudo:*)
```
3. Agent Settings (.autogpt/agents/{id}/permissions.yaml)
Agent-specific permission overrides.
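As a sketch, an agent-level override could look like the following. The rule syntax mirrors the workspace example above; the specific rules here are illustrative, not defaults:

```yaml
# .autogpt/agents/{id}/permissions.yaml
allow:
  - execute_shell(python:**)   # let this agent run Python commands
deny:
  - web_search(*)              # revoke web access for this agent only
```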
For more configuration options, see the setup guide.
Running AutoGPT
The CLI should be self-documenting:
```shell
$ ./autogpt.sh --help
Usage: python -m autogpt [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  run    Sets up and runs an agent, based on the task specified by the...
  serve  Starts an Agent Protocol compliant AutoGPT server, which creates...
```
When run without a sub-command, it will default to run for legacy reasons.
The run sub-command starts AutoGPT with the legacy CLI interface:
```shell
$ ./autogpt.sh run --help
Usage: python -m autogpt run [OPTIONS]

  Sets up and runs an agent, based on the task specified by the user, or
  resumes an existing agent.

Options:
  -c, --continuous                Enable Continuous Mode
  -y, --skip-reprompt             Skips the re-prompting messages at the
                                  beginning of the script
  -l, --continuous-limit INTEGER  Defines the number of times to run in
                                  continuous mode
  --speak                         Enable Speak Mode
  --debug                         Enable Debug Mode
  --skip-news                     Specifies whether to suppress the output of
                                  latest news on startup.
  --install-plugin-deps           Installs external dependencies for 3rd
                                  party plugins.
  --ai-name TEXT                  AI name override
  --ai-role TEXT                  AI role override
  --constraint TEXT               Add or override AI constraints to include
                                  in the prompt; may be used multiple times
                                  to pass multiple constraints
  --resource TEXT                 Add or override AI resources to include in
                                  the prompt; may be used multiple times to
                                  pass multiple resources
  --best-practice TEXT            Add or override AI best practices to
                                  include in the prompt; may be used multiple
                                  times to pass multiple best practices
  --override-directives           If specified, --constraint, --resource and
                                  --best-practice will override the AI's
                                  directives instead of being appended to them
  --component-config-file TEXT    Path to the json configuration file.
  --help                          Show this message and exit.
```
The serve sub-command starts AutoGPT wrapped in an Agent Protocol server:
```shell
$ ./autogpt.sh serve --help
Usage: python -m autogpt serve [OPTIONS]

  Starts an Agent Protocol compliant AutoGPT server, which creates a custom
  agent for every task.

Options:
  --debug                Enable Debug Mode
  --install-plugin-deps  Installs external dependencies for 3rd party
                         plugins.
  --help                 Show this message and exit.
```
With serve, the application exposes an Agent Protocol compliant API and serves a frontend,
by default on http://localhost:8000.
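As an illustration of the API, a task can be created by POSTing to the Agent Protocol's task-creation endpoint. The sketch below only builds the request object without sending it; the endpoint path follows the Agent Protocol spec, and the task input is a made-up example:

```python
import json
from urllib.request import Request

# Per the Agent Protocol spec, POST /ap/v1/agent/tasks creates a new task.
# The "input" field carries the natural-language task for the agent.
payload = json.dumps({"input": "Write 'hello world' to hello.txt"}).encode()
req = Request(
    "http://localhost:8000/ap/v1/agent/tasks",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
# POST http://localhost:8000/ap/v1/agent/tasks
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires a running `serve` instance.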
For more comprehensive instructions, see the user guide.
Workspaces
Agents operate within a workspace - a directory containing all agent data:
```
{workspace}/
├── .autogpt/
│   ├── autogpt.yaml        # Workspace-level permissions
│   ├── ap_server.db        # Agent Protocol database (server mode)
│   └── agents/
│       └── AutoGPT-{agent_id}/
│           ├── state.json        # Agent profile, directives, history
│           ├── permissions.yaml  # Agent-specific permissions
│           └── workspace/        # Agent's sandboxed working directory
```
- Defaults to the current working directory
- Multiple agents can coexist in the same workspace
- File access is sandboxed to the agent's workspace/ subdirectory
- State persists across sessions
Permissions
AutoGPT uses a layered permission system with pattern matching.
Permission Check Order (First Match Wins)
1. Agent deny list → Block
2. Workspace deny list → Block
3. Agent allow list → Allow
4. Workspace allow list → Allow
5. Prompt user → Interactive approval
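The first-match-wins resolution above can be sketched in Python. This is illustrative rather than AutoGPT's shipped implementation: Python's `fnmatch` stands in for the glob matcher, and the rule lists are examples:

```python
from fnmatch import fnmatchcase

def matches(rule: str, command: str, argument: str) -> bool:
    """Check a rule like 'read_file(**.env)' against a concrete call."""
    name, _, pattern = rule.partition("(")
    return name == command and fnmatchcase(argument, pattern.rstrip(")"))

def resolve(command, argument, agent_deny, ws_deny, agent_allow, ws_allow):
    # First match wins: deny lists are checked before allow lists,
    # and agent-level rules before workspace-level rules.
    for rules, verdict in [(agent_deny, "deny"), (ws_deny, "deny"),
                           (agent_allow, "allow"), (ws_allow, "allow")]:
        if any(matches(r, command, argument) for r in rules):
            return verdict
    return "prompt"  # no rule matched: fall through to interactive approval

print(resolve("read_file", "/ws/secrets.env",
              [], ["read_file(**.env)"], ["read_file(/ws/**)"], []))
# → deny  (the workspace deny rule is checked before the allow rule)
```

Note that the deny rule wins even though an allow rule also matches, which is exactly why sensitive-file patterns belong on a deny list.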
Pattern Syntax
Format: command_name(glob_pattern)
| Pattern | Description |
|---|---|
| `read_file({workspace}/**)` | Read any file in workspace |
| `execute_shell(python:**)` | Execute Python commands |
| `web_search(*)` | All web searches |
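To illustrate the glob semantics, the sketch below matches rules against concrete calls, expanding `{workspace}` first. Python's `fnmatchcase` is used as a stand-in for the actual matcher, and the workspace path is hypothetical:

```python
from fnmatch import fnmatchcase

WORKSPACE = "/home/me/project"  # hypothetical workspace root

def rule_matches(rule: str, call: str) -> bool:
    """Expand {workspace} in a rule, then glob-match it against a call."""
    return fnmatchcase(call, rule.replace("{workspace}", WORKSPACE))

print(rule_matches("read_file({workspace}/**)",
                   f"read_file({WORKSPACE}/notes.txt)"))       # True
print(rule_matches("read_file({workspace}/**)",
                   "read_file(/etc/passwd)"))                  # False
print(rule_matches("execute_shell(python:**)",
                   "execute_shell(python:scripts/run.py)"))    # True
```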
Interactive Approval Scopes
When prompted for permission:
- Once - Allow this one time only
- Agent - Always allow for this agent (saves to permissions.yaml)
- Workspace - Always allow for all agents (saves to autogpt.yaml)
- Deny - Block this command
Default Security
Denied by default:
- Sensitive files (.env, .key, .pem)
- Destructive commands (rm -rf, sudo)
- Operations outside the workspace
📚 Resources
- 📔 AutoGPT project wiki
- 🧮 AutoGPT project kanban
- 🌃 AutoGPT roadmap
⚠️ Limitations
This experiment aims to showcase the potential of GPT-4 but comes with some limitations:
- Not a polished application or product, just an experiment
- May not perform well in complex, real-world business scenarios. In fact, if it actually does, please share your results!
- Quite expensive to run, so set and monitor your API key limits with OpenAI!
🛡 Disclaimer
This project, AutoGPT, is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.
The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by AutoGPT.
Please note that the use of the GPT-4 language model can be expensive due to its token usage. By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.
As an autonomous experiment, AutoGPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.
By using AutoGPT, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms.
In Q2 of 2023, AutoGPT became the fastest-growing open-source project in history. Now that the dust has settled, we're committed to continued sustainable development and growth of the project.