## Task

The SmartDecisionMakerBlock is a specialized block in a graph-based system that leverages a language model (LLM) to make intelligent decisions about which tools or functions to invoke based on a user-provided prompt. It processes input data, interacts with a language model, and dynamically determines the appropriate tools to call from a set of available options, making it a powerful component for AI-driven workflows.

## How It Works in Practice

- **Scenario:** Imagine a workflow where a user inputs, "Send an email to John about the meeting." The SmartDecisionMakerBlock is connected to tools like `send_email`, `schedule_meeting`, and `search_contacts`.
- **Execution:**
  1. The block receives the prompt and system instructions (e.g., "Choose a function to call").
  2. It identifies the available tools from the graph and constructs their signatures (e.g., `send_email(recipient, subject, body)`).
  3. The LLM analyzes the prompt and decides to call `send_email` with arguments like `recipient: "John"`, `subject: "Meeting"`, `body: "Let's discuss..."`.
  4. The block yields these tool-specific outputs, which can be picked up by downstream nodes to execute the email-sending action. (A hedged sketch of the resulting function-calling request appears after this description.)

## Changes 🏗️

- Add the Smart Decision Maker (SDM) block.
- Break circular imports in integration code.

## Work in Progress

⚠️ **Important note: this is a temporary UX for the system; the UX will be addressed in a future PR.** ⚠️

### Current Status

I'm currently focused on the smart decision logic. The main additions in the ongoing PR include:

- Defining function signatures for OpenAI function-calling schemas based on node links and the linked blocks.
- Adding tests for function signature generation.
- Forcing all tool calls to be made via an agent (needs to be uncommented).
- Restricting each tool call entry to a single node.
- Simplifying the output emission process to emit each parameter one at a time.
- Changing the tests to use agents, hardcoding the output the way I think it should work, to verify it actually works.
- Hooking up OpenAI, in a simplified way, to test the function calling (mocked for testing).
- Once all the above is working, using the credentials system and building off `llm.py`.

### What's Next

- Review process.

### Reviewers Phase 1

This PR is now ready for review. During the first phase of reviews I'm looking for comments on approach and logic. Out of scope at this stage: code style and organization.

### Reviewers Phase 2

Once we are all happy with the approach and logic, we can open the review process to general code quality and nits.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>
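To make the SDM execution flow above concrete, here is a minimal sketch of the kind of function-calling request the block could assemble for the email scenario. The model name, the use of curl, and the exact schema shape are illustrative assumptions, not the block's actual implementation:

```bash
# Hypothetical example only: the SDM block builds the request internally,
# but the payload sent to the LLM would look roughly like this.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "Choose a function to call"},
      {"role": "user", "content": "Send an email to John about the meeting"}
    ],
    "tools": [{
      "type": "function",
      "function": {
        "name": "send_email",
        "description": "Send an email to a contact",
        "parameters": {
          "type": "object",
          "properties": {
            "recipient": {"type": "string"},
            "subject": {"type": "string"},
            "body": {"type": "string"}
          },
          "required": ["recipient", "subject", "body"]
        }
      }
    }]
  }'
```

The response would then contain a `tool_calls` entry naming `send_email` with the generated arguments, which the block yields as tool-specific outputs for downstream nodes.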
# AutoGPT Platform
Welcome to the AutoGPT Platform - a powerful system for creating and running AI agents to solve business problems. This platform enables you to harness the power of artificial intelligence to automate tasks, analyze data, and generate insights for your organization.
## Getting Started

### Prerequisites
- Docker
- Docker Compose V2 (comes with Docker Desktop, or can be installed separately)
- Node.js & NPM (for running the frontend application)
### Running the System
To run the AutoGPT Platform, follow these steps:
- Clone this repository to your local machine and navigate to the `autogpt_platform` directory within the repository:

  ```bash
  git clone <https://github.com/Significant-Gravitas/AutoGPT.git | git@github.com:Significant-Gravitas/AutoGPT.git>
  cd AutoGPT/autogpt_platform
  ```

- Run the following command:

  ```bash
  git submodule update --init --recursive --progress
  ```

  This command will initialize and update the submodules in the repository. The `supabase` folder will be cloned to the root directory.

- Run the following command:

  ```bash
  cp supabase/docker/.env.example .env
  ```

  This command copies the `.env.example` file from `supabase/docker` to `.env` in the current (`autogpt_platform`) directory. You can modify the `.env` file to add your own environment variables.

- Run the following command:

  ```bash
  docker compose up -d
  ```

  This command will start all the necessary backend services defined in the `docker-compose.yml` file in detached mode.

- Navigate to `frontend` within the `autogpt_platform` directory:

  ```bash
  cd frontend
  ```

  You will need to run your frontend application separately on your local machine.

- Run the following command:

  ```bash
  cp .env.example .env.local
  ```

  This command will copy the `.env.example` file to `.env.local` in the `frontend` directory. You can modify `.env.local` within this folder to add your own environment variables for the frontend application.

- Run the following command:

  ```bash
  npm install
  npm run dev
  ```

  This command will install the necessary dependencies and start the frontend application in development mode. If you are using Yarn, you can run the following commands instead:

  ```bash
  yarn install && yarn dev
  ```

- Open your browser and navigate to `http://localhost:3000` to access the AutoGPT Platform frontend.
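For convenience, the steps above condense into the following command sequence, assuming default paths and no changes to the copied environment files:

```bash
# Backend: clone, pull submodules, copy env, start services
git clone https://github.com/Significant-Gravitas/AutoGPT.git
cd AutoGPT/autogpt_platform
git submodule update --init --recursive --progress
cp supabase/docker/.env.example .env
docker compose up -d

# Frontend: copy env, install dependencies, start dev server
cd frontend
cp .env.example .env.local
npm install
npm run dev
```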
## Docker Compose Commands
Here are some useful Docker Compose commands for managing your AutoGPT Platform:
- `docker compose up -d`: Start the services in detached mode.
- `docker compose stop`: Stop the running services without removing them.
- `docker compose rm`: Remove stopped service containers.
- `docker compose build`: Build or rebuild services.
- `docker compose down`: Stop and remove containers, networks, and volumes.
- `docker compose watch`: Watch for changes in your services and automatically update them.
### Sample Scenarios
Here are some common scenarios where you might use multiple Docker Compose commands:
- Updating and restarting a specific service:

  ```bash
  docker compose build api_srv
  docker compose up -d --no-deps api_srv
  ```

  This rebuilds the `api_srv` service and restarts it without affecting other services.

- Viewing logs for troubleshooting:

  ```bash
  docker compose logs -f api_srv ws_srv
  ```

  This shows and follows the logs for both the `api_srv` and `ws_srv` services.

- Scaling a service for increased load:

  ```bash
  docker compose up -d --scale executor=3
  ```

  This scales the `executor` service to 3 instances to handle increased load. (A sketch for verifying the scaled instances follows this list.)

- Stopping the entire system for maintenance:

  ```bash
  docker compose stop
  docker compose rm -f
  docker compose pull
  docker compose up -d
  ```

  This stops all services, removes containers, pulls the latest images, and restarts the system.

- Developing with live updates:

  ```bash
  docker compose watch
  ```

  This watches for changes in your code and automatically updates the relevant services.

- Checking the status of services:

  ```bash
  docker compose ps
  ```

  This shows the current status of all services defined in your `docker-compose.yml` file.
These scenarios demonstrate how to use Docker Compose commands in combination to manage your AutoGPT Platform effectively.
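As noted in the scaling scenario above, you can verify that the extra instances are running by passing the service name to `docker compose ps`:

```bash
# Lists only the executor containers; after scaling to 3,
# three running instances should appear in the output
docker compose ps executor
```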
## Persisting Data
To persist data for PostgreSQL and Redis, you can modify the docker-compose.yml file to add volumes. Here's how:
- Open the `docker-compose.yml` file in a text editor.

- Add volume configurations for the PostgreSQL and Redis services:

  ```yaml
  services:
    postgres:
      # ... other configurations ...
      volumes:
        - postgres_data:/var/lib/postgresql/data

    redis:
      # ... other configurations ...
      volumes:
        - redis_data:/data

  volumes:
    postgres_data:
    redis_data:
  ```

- Save the file and run `docker compose up -d` to apply the changes.
This configuration will create named volumes for PostgreSQL and Redis, ensuring that your data persists across container restarts.
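To confirm the volumes were created, you can list them with Docker. Note that Compose prefixes volume names with the project name, which defaults to the directory name, so the exact prefix is an assumption here:

```bash
# Named volumes should appear here once the services are up,
# e.g. autogpt_platform_postgres_data (project prefix assumed)
docker volume ls
```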