# feature(platform): Smart Decision Maker Block (#9490)
## Task

The SmartDecisionMakerBlock is a specialized block in a graph-based
system that leverages a language model (LLM) to decide which tools or
functions to invoke based on a user-provided prompt. It processes input
data, interacts with the LLM, and dynamically determines the appropriate
tools to call from the set of available options, making it a powerful
component for AI-driven workflows.

## How It Works in Practice

- **Scenario:** Imagine a workflow where a user inputs, "Send an email
to John about the meeting." The SmartDecisionMakerBlock is connected to
tools like send_email, schedule_meeting, and search_contacts.
- **Execution:**
1. The block receives the prompt and system instructions (e.g., "Choose
a function to call").
2. It identifies the available tools from the graph and constructs their
signatures (e.g., send_email(recipient, subject, body)).
3. The LLM analyzes the prompt and decides to call send_email with
arguments like recipient: "John", subject: "Meeting", body: "Let’s
discuss...".
4. The block yields these tool-specific outputs, which can be picked up
by downstream nodes to execute the email-sending action (a rough sketch
of this exchange follows the list).
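
To make this flow concrete, here is a minimal, illustrative sketch of the exchange using the plain `openai` Python client. The send_email schema, the model name, and the per-parameter output printing are all assumptions for illustration; the actual block constructs its schemas from node links and will eventually go through the platform's credentials system and llm.py.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; illustrative only

# Hypothetical tool schema the block could construct from a linked send_email node.
send_email_tool = {
    "type": "function",
    "function": {
        "name": "send_email",
        "description": "Send an email to a contact.",
        "parameters": {
            "type": "object",
            "properties": {
                "recipient": {"type": "string"},
                "subject": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["recipient", "subject", "body"],
        },
    },
}

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": "Choose a function to call."},
        {"role": "user", "content": "Send an email to John about the meeting."},
    ],
    tools=[send_email_tool],
    tool_choice="auto",
)

# The model's decision comes back as a tool call; the block would emit one
# output per argument so downstream nodes can pick up exactly what they need.
tool_call = response.choices[0].message.tool_calls[0]
arguments = json.loads(tool_call.function.arguments)
for param, value in arguments.items():
    print(f"{tool_call.function.name}.{param} -> {value}")
```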


## Changes 🏗️
- Add the Smart Decision Maker (SDM) block.
- Break circular imports in the integration code (a generic sketch of the deferred-import pattern follows).
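
The usual way to break such a cycle is to defer one of the imports to the point of use rather than importing at module load time. The module and function names below are made up purely to illustrate the pattern, not the actual integration modules touched in this PR.

```python
# integrations/webhooks.py (hypothetical module names, for illustration only)

def get_webhook_manager(provider_name: str):
    # Importing inside the function body breaks the import-time cycle:
    # this module no longer needs integrations.providers at import time,
    # so integrations.providers can in turn import from this module safely.
    from integrations.providers import load_provider  # deferred import

    provider = load_provider(provider_name)
    return provider.webhook_manager
```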

![Screenshot 2025-02-21 at 10 23 25](https://github.com/user-attachments/assets/6fbfd875-fb1b-4d77-8051-a214c3c86082)


## Work in Progress

⚠️ **Important note: this is a temporary UX for the system; the UX will be
addressed in a future PR** ⚠️

### Current Status

I’m currently focused on the smart decision logic. The main additions in
this PR include:
- Defining function signatures for OpenAI function-calling schemas based
on node links and the linked blocks.
- Adding tests for function signature generation (a rough, self-contained
sketch follows this list).
- Forcing all tool calls to be made via an agent (still needs to be uncommented).
- Restricting each tool call entry to a single node.
- Simplifying the output emission process to emit each parameter one at a
time.
- Changing the tests to use agents and hardcoding the output the way I
think it should work, to verify it actually does.
- Hooking up OpenAI in a simplified way to test the function calling
(mocked for testing).
- Once all of the above is working, using the credentials system and
building on top of llm.py.
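
As a rough illustration of the signature-generation tests mentioned above, the sketch below uses a mocked node and a simplified stand-in builder. `create_function_signature`, `block_name`, and `input_names` are hypothetical names, not the actual implementation or the tests under backend/test.

```python
from unittest.mock import MagicMock

# Simplified stand-in for the signature builder; the real implementation derives
# the schema from node links and the linked blocks' input schemas.
def create_function_signature(tool_nodes) -> list[dict]:
    return [
        {
            "type": "function",
            "function": {
                "name": node.block_name,
                "parameters": {
                    "type": "object",
                    "properties": {
                        name: {"type": "string"} for name in node.input_names
                    },
                    "required": list(node.input_names),
                },
            },
        }
        for node in tool_nodes
    ]


def test_function_signature_includes_linked_tool_parameters():
    # Mocked node standing in for a linked send_email block.
    fake_node = MagicMock()
    fake_node.block_name = "send_email"
    fake_node.input_names = ["recipient", "subject", "body"]

    signatures = create_function_signature([fake_node])

    assert signatures[0]["function"]["name"] == "send_email"
    assert set(signatures[0]["function"]["parameters"]["properties"]) == {
        "recipient",
        "subject",
        "body",
    }
```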



### What’s Next

- Review Process

### Reviewers Phase 1

This PR is now ready for review. During the first phase of reviews I'm
looking for comments on the approach and logic.

Out of scope at this stage: code style and organization.

### Reviewers Phase 2

Once we are all happy with the approach and logic, we can open the
review process to general code quality and nits.

---------

Co-authored-by: Zamil Majdy <zamil.majdy@agpt.co>