diff --git a/python/README.md b/python/README.md index db4573ee73..c344aa48ca 100644 --- a/python/README.md +++ b/python/README.md @@ -3,14 +3,13 @@ - [Documentation](http://microsoft.github.io/agnext) - [Examples](https://github.com/microsoft/agnext/tree/main/python/examples) - ## Package layering - `core` are the foundational generic interfaces upon which all else is built. This module must not depend on any other module. -- `components` are the building blocks for creating single agents -- `application` are implementations of core components that are used to compose an application -- `chat` is the concrete implementation of multi-agent interactions. Most users will deal with this module. - +- `application` are implementations of core components that are used to compose an application. +- `components` are the building blocks for creating agents. +- `chat` are concrete implementations of agents and multi-agent interactions. + It is used for creating demos and experimenting with multi-agent design patterns. ## Development @@ -27,6 +26,7 @@ hatch run check ### Virtual environment To get a shell with the package available (virtual environment) run: + ```sh hatch shell ``` diff --git a/python/docs/src/core-concepts/memory.md b/python/docs/src/core-concepts/memory.md index 0ee3ac8046..3801be08fc 100644 --- a/python/docs/src/core-concepts/memory.md +++ b/python/docs/src/core-concepts/memory.md @@ -12,7 +12,7 @@ Built-in memory implementations are: - {py:class}`agnext.chat.memory.HeadAndTailChatMemory` To create a custom memory implementation, you need to subclass the -{py:class}`agnext.chat.memory.ChatMemory` protocol class and implement +{py:class}`agnext.components.memory.ChatMemory` protocol class and implement all its methods. 
For example, you can use [LLMLingua](https://github.com/microsoft/LLMLingua) to create a custom memory implementation that provides a compressed diff --git a/python/docs/src/guides/group-chat-coder-reviewer.md b/python/docs/src/guides/group-chat-coder-reviewer.md index e59cd63c8d..5828c3cee6 100644 --- a/python/docs/src/guides/group-chat-coder-reviewer.md +++ b/python/docs/src/guides/group-chat-coder-reviewer.md @@ -34,54 +34,62 @@ Next, let's create the runtime: runtime = SingleThreadedAgentRuntime() ``` -Now, let's create the participant agents using the +Now, let's register the participant agents using the {py:class}`agnext.chat.agents.ChatCompletionAgent` class. The agents do not use any tools here and have a short memory of the last 10 messages: ```python -coder = ChatCompletionAgent( - name="Coder", - description="An agent that writes code", - runtime=runtime, - system_messages=[ - SystemMessage( - "You are a coder. You can write code to solve problems.\n" - "Work with the reviewer to improve your code." - ) - ], - model_client=OpenAI(model="gpt-4-turbo"), - memory=BufferedChatMemory(buffer_size=10), +coder = runtime.register_and_get_proxy( + "Coder", + lambda: ChatCompletionAgent( + description="An agent that writes code", + system_messages=[ + SystemMessage( + "You are a coder. You can write code to solve problems.\n" + "Work with the reviewer to improve your code." + ) + ], + model_client=OpenAI(model="gpt-4-turbo"), + memory=BufferedChatMemory(buffer_size=10), + ), ) -reviewer = ChatCompletionAgent( - name="Reviewer", - description="An agent that reviews code", - runtime=runtime, - system_messages=[ - SystemMessage( - "You are a code reviewer. You focus on correctness, efficiency and safety of the code.\n" - "Provide reviews only.\n" - "Output only 'APPROVE' to approve the code and end the conversation." 
- ) - ], - model_client=OpenAI(model="gpt-4-turbo"), - memory=BufferedChatMemory(buffer_size=10), +reviewer = runtime.register_and_get_proxy( + "Reviewer", + lambda: ChatCompletionAgent( + description="An agent that reviews code", + system_messages=[ + SystemMessage( + "You are a code reviewer. You focus on correctness, efficiency and safety of the code.\n" + "Respond using the following format:\n" + "Code Review:\n" + "Correctness: \n" + "Efficiency: \n" + "Safety: \n" + "Approval: \n" + "Suggested Changes: " + ) + ], + model_client=OpenAI(model="gpt-4-turbo"), + memory=BufferedChatMemory(buffer_size=10), + ), ) ``` -Let's create the Group Chat Manager agent +Let's register the Group Chat Manager agent ({py:class}`agnext.chat.patterns.GroupChatManager`) that orchestrates the conversation. ```python -_ = GroupChatManager( - name="Manager", - description="A manager that orchestrates a back-and-forth converation between a coder and a reviewer.", - runtime=runtime, - participants=[coder, reviewer], # The order of the participants indicates the order of speaking. - memory=BufferedChatMemory(buffer_size=10), - termination_word="APPROVE", - on_message_received=lambda message: print(f"{'-'*80}\n{message.source}: {message.content}"), +runtime.register( "Manager", lambda: GroupChatManager( description="A manager that orchestrates a back-and-forth conversation between a coder and a reviewer.", runtime=runtime, participants=[coder.id, reviewer.id], # The order of the participants indicates the order of speaking. memory=BufferedChatMemory(buffer_size=10), termination_word="APPROVE", ), ) ``` diff --git a/python/docs/src/index.rst b/python/docs/src/index.rst index 0b5115f8d1..73c142589e 100644 --- a/python/docs/src/index.rst +++ b/python/docs/src/index.rst @@ -11,6 +11,20 @@ communication between agents, allowing for a :doc:`diverse set of agent patterns `. 
AGNext provides default agent implementations for common uses, such as chat completion agents, but also allows for fully custom agents. +AGNext's developer API consists of the following layers: + +- :doc:`core ` - The core interfaces that define agents + and the runtime. +- :doc:`application ` - Implementations of the runtime + and other modules (e.g., logging) for building applications. +- :doc:`components ` - Interfaces and implementations + for agents, models, memory, and tools. +- :doc:`chat ` - High-level API for creating demos and + experimenting with multi-agent patterns. It offers pre-built agents, patterns, + message types, and memory stores. + + + .. toctree:: :caption: Getting started :hidden: diff --git a/python/examples/README.md b/python/examples/README.md index 83e8640ce9..322f171ce4 100644 --- a/python/examples/README.md +++ b/python/examples/README.md @@ -2,6 +2,33 @@ This directory contains examples of how to use AGNext. +We provide examples that use pre-built agents and message types in the `chat` layer. +These examples are intended for users who want to quickly create +demos and experiment with multi-agent design patterns. + +- `coder_reviewer.py`: using coder and reviewer agents to implement the + reflection pattern for code generation. +- `illustrator_critics.py`: using an illustrator, critics, and a descriptor agent + to implement the reflection pattern for image generation. +- `chest_game.py`: using two chess player agents to demonstrate tool use and reflection + on tool use. +- `assistant.py`: a demonstration of how to use the OpenAI Assistant API to create + a ChatGPT agent. +- `software_consultancy.py`: a demonstration of multi-agent interaction using + the group chat pattern. +- `orchestrator.py`: a demonstration of multi-agent problem solving using + the orchestrator pattern. + +We also provide examples that use only the `core`, `application`, and `components` layers. 
+These examples are intended for advanced users who want to create +custom agents and message types for building applications. + +- `inner_outer.py`: An example of how to create inner and outer custom agents. +- `chat_room.py`: An example of how to create a chat room of custom agents without + a centralized orchestrator. + +## Running the examples + First, you need a shell with AGNext and the examples' dependencies installed. To do this, run: ```bash @@ -16,6 +43,7 @@ python coder_reviewer.py ``` Or simply: + ```bash hatch run python coder_reviewer.py ```
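
A note on the registration change in `group-chat-coder-reviewer.md` above: agents are now created by factories registered on the runtime, and the proxy's `.id` is what gets passed around (e.g., in the manager's `participants` list). A minimal, library-independent sketch of that register-and-get-proxy idea follows; `SimpleRuntime` and `AgentProxy` are illustrative stand-ins, not agnext's actual classes:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass(frozen=True)
class AgentProxy:
    """A lightweight handle for a registered agent (stand-in for agnext's proxy)."""
    id: str


class SimpleRuntime:
    def __init__(self) -> None:
        self._factories: Dict[str, Callable[[], Any]] = {}
        self._instances: Dict[str, Any] = {}

    def register(self, name: str, factory: Callable[[], Any]) -> None:
        # Store the factory; the agent itself is constructed lazily on first use.
        self._factories[name] = factory

    def register_and_get_proxy(self, name: str, factory: Callable[[], Any]) -> AgentProxy:
        # Register and return a proxy whose .id can be passed to other agents.
        self.register(name, factory)
        return AgentProxy(id=name)

    def get(self, name: str) -> Any:
        # Instantiate on demand and cache, so each agent is built at most once.
        if name not in self._instances:
            self._instances[name] = self._factories[name]()
        return self._instances[name]


runtime = SimpleRuntime()
coder = runtime.register_and_get_proxy("Coder", lambda: {"description": "writes code"})
print(coder.id)  # -> Coder
```

The factory indirection is what lets the runtime own agent lifecycles; user code holds only ids, which is why the diff changes `participants=[coder, reviewer]` to `participants=[coder.id, reviewer.id]`.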
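
On the `memory.md` change above: custom memories implement the methods of the `ChatMemory` protocol class. A rough plain-Python sketch of that protocol-plus-buffer idea is below; the method names and the `BufferedMemory` class are assumptions for illustration, not agnext's actual `ChatMemory` signature:

```python
from typing import List, Protocol


class ChatMemoryProtocol(Protocol):
    # Illustrative protocol; agnext's real ChatMemory methods may differ.
    def add_message(self, message: str) -> None: ...
    def get_messages(self) -> List[str]: ...
    def clear(self) -> None: ...


class BufferedMemory:
    """Keeps only the last `buffer_size` messages, in the spirit of BufferedChatMemory."""

    def __init__(self, buffer_size: int) -> None:
        self._buffer_size = buffer_size
        self._messages: List[str] = []

    def add_message(self, message: str) -> None:
        self._messages.append(message)

    def get_messages(self) -> List[str]:
        # Return only the tail of the history, bounding the prompt size.
        return self._messages[-self._buffer_size:]

    def clear(self) -> None:
        self._messages.clear()


mem: ChatMemoryProtocol = BufferedMemory(buffer_size=3)
for i in range(5):
    mem.add_message(f"msg{i}")
print(mem.get_messages())  # -> ['msg2', 'msg3', 'msg4']
```

Because `ChatMemory` is a protocol class, a compressing implementation (e.g., one backed by LLMLingua, as the docs suggest) only needs to supply the same methods; no inheritance from a concrete base is required.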