AutoGPT Classic

AutoGPT Classic was an experimental project to demonstrate autonomous GPT-4 operation: it was designed to let GPT-4 operate independently, chaining tasks together to achieve complex goals.

Project Status

This project is unsupported, and dependencies will not be updated. It was an experiment that has concluded its initial research phase. If you want to use AutoGPT, you should use the AutoGPT Platform.

For those interested in autonomous AI agents, we recommend exploring more actively maintained alternatives or referring to this codebase for educational purposes only.

Overview

AutoGPT Classic was one of the first implementations of autonomous AI agents - AI systems that can independently:

  • Break down complex goals into smaller tasks
  • Execute those tasks using available tools and APIs
  • Learn from the results and adjust their approach
  • Chain multiple actions together to achieve an objective

Structure

  • /benchmark - Performance testing tools
  • /forge - Core autonomous agent framework
  • /original_autogpt - Original implementation

Getting Started

Prerequisites

  • Python 3.10 or later
  • Poetry (used for all installation commands below)
  • An OpenAI API key (see Configuration)

Installation

# Clone the repository
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/classic

# Install forge (core library)
cd forge && poetry install && cd ..

# Or install original_autogpt (includes forge as a dependency)
cd original_autogpt && poetry install && cd ..

# Install benchmark (optional)
cd benchmark && poetry install && cd ..
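
To sanity-check an install, printing the CLI help is usually enough; this uses the autogpt entry point shown under Running below:

# From the classic/ directory, confirm the entry point resolves
cd original_autogpt && poetry run autogpt --help && cd ..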

Configuration

Copy the example environment file and add your API keys:

cp .env.example .env
# Edit .env with your OPENAI_API_KEY, etc.
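
For reference, a minimal .env needs little more than the OpenAI key; other settings are component-specific, so treat this sketch as illustrative and check .env.example for what is actually supported:

OPENAI_API_KEY=sk-your-key-here
# Further settings (models, workspace paths, etc.) vary by component; see .env.example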

Running

# Run forge agent
cd forge && poetry run python -m forge

# Run original autogpt server
cd original_autogpt && poetry run serve --debug

# Run autogpt CLI
cd original_autogpt && poetry run autogpt

Agents run on http://localhost:8000 by default.
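
Both servers speak the Agent Protocol, so you can create a task over plain HTTP once one is running. A sketch, assuming the standard /ap/v1 route prefix from the Agent Protocol spec (the task input is just an example):

# Create a task for the running agent
curl -X POST http://localhost:8000/ap/v1/agent/tasks \
  -H "Content-Type: application/json" \
  -d '{"input": "Write the text hello world to hello.txt"}'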

Benchmarking

cd benchmark && poetry run agbenchmark
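
agbenchmark accepts options for selecting which challenges run; the built-in help is the reliable reference for the current flag set:

cd benchmark && poetry run agbenchmark --help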

Testing

cd forge && poetry run pytest
cd original_autogpt && poetry run pytest
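
Both suites are plain pytest, so standard pytest options work; for example, to run only tests matching a name pattern with verbose output (the pattern here is just an illustration):

cd forge && poetry run pytest -k "history" -v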

Security Notice

This codebase has known security vulnerabilities in its dependencies, and those dependencies will not be updated. Use it for educational purposes only.

License

This part of the project is licensed under the MIT License - see the LICENSE file for details.

Documentation

Please refer to the documentation for more detailed information about the project's architecture and concepts.