docs: Core API doc update: split out model context from model clients; separate framework and components (#5171)

This commit is contained in:
Eric Zhu
2025-01-24 11:17:07 -08:00
committed by GitHub
parent b375d4b18c
commit 146674399b
12 changed files with 563 additions and 503 deletions


@@ -175,6 +175,9 @@ nb_mime_priority_overrides = [
rediraffe_redirects = {
"user-guide/agentchat-user-guide/tutorial/selector-group-chat.ipynb": "user-guide/agentchat-user-guide/selector-group-chat.ipynb",
"user-guide/agentchat-user-guide/tutorial/swarm.ipynb": "user-guide/agentchat-user-guide/swarm.ipynb",
"user-guide/core-user-guide/framework/command-line-code-executors.ipynb": "user-guide/core-user-guide/components/command-line-code-executors.ipynb",
"user-guide/core-user-guide/framework/model-clients.ipynb": "user-guide/core-user-guide/components/model-clients.ipynb",
"user-guide/core-user-guide/framework/tools.ipynb": "user-guide/core-user-guide/components/tools.ipynb",
}


@@ -157,7 +157,7 @@ custom_model_client = OpenAIChatCompletionClient(
> Please test them before using them.
Read about [Model Clients](./tutorial/models.ipynb)
-in AgentChat Tutorial and more detailed information on [Core API Docs](../core-user-guide/framework/model-clients.ipynb)
+in AgentChat Tutorial and more detailed information on [Core API Docs](../core-user-guide/components/model-clients.ipynb)
Support for other hosted models will be added in the future.
@@ -1231,7 +1231,7 @@ in the Core API documentation for more details.
The code executors in `v0.2` and `v0.4` are nearly identical except
the `v0.4` executors support async API. You can also use
{py:class}`~autogen_core.CancellationToken` to cancel a code execution if it takes too long.
-See [Command Line Code Executors Tutorial](../core-user-guide/framework/command-line-code-executors.ipynb)
+See [Command Line Code Executors Tutorial](../core-user-guide/components/command-line-code-executors.ipynb)
in the Core API documentation.
We also added `AzureContainerCodeExecutor` that can use Azure Container Apps (ACA)
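The cancellation behavior mentioned above can be sketched with plain asyncio; the `CancelToken` and `run_code` names below are hypothetical stand-ins for illustration, not the real `autogen_core.CancellationToken` API.

```python
import asyncio


class CancelToken:
    """Minimal sketch of a cancellation token for long-running work."""

    def __init__(self) -> None:
        self._event = asyncio.Event()

    def cancel(self) -> None:
        self._event.set()

    @property
    def cancelled(self) -> bool:
        return self._event.is_set()


async def run_code(token: CancelToken, steps: int) -> int:
    """Simulated code execution that checks the token between steps."""
    done = 0
    for _ in range(steps):
        if token.cancelled:  # caller requested cancellation; stop early
            break
        await asyncio.sleep(0)  # yield control, simulating one unit of work
        done += 1
    return done


async def main() -> int:
    token = CancelToken()
    task = asyncio.create_task(run_code(token, 1_000_000))
    await asyncio.sleep(0)  # let the task start
    token.cancel()          # cancel long before it would finish
    return await task


completed = asyncio.run(main())
```

The real executors accept a token alongside the code to run; the key point is the cooperative check between units of work.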


@@ -6,7 +6,7 @@
"source": [
"# Models\n",
"\n",
-"In many cases, agents need access to LLM model services such as OpenAI, Azure OpenAI, or local models. Since there are many different providers with different APIs, `autogen-core` implements a protocol for [model clients](../../core-user-guide/framework/model-clients.ipynb) and `autogen-ext` implements a set of model clients for popular model services. AgentChat can use these model clients to interact with model services. \n",
+"In many cases, agents need access to LLM model services such as OpenAI, Azure OpenAI, or local models. Since there are many different providers with different APIs, `autogen-core` implements a protocol for [model clients](../../core-user-guide/components/model-clients.ipynb) and `autogen-ext` implements a set of model clients for popular model services. AgentChat can use these model clients to interact with model services. \n",
"\n",
"```{note}\n",
"See {py:class}`~autogen_ext.models.cache.ChatCompletionCache` for a caching wrapper to use with the following clients.\n",
@@ -188,4 +188,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
}


@@ -103,8 +103,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"### Streaming Response\n",
"\n",
"You can use the {py:meth}`~autogen_ext.models.OpenAIChatCompletionClient.create_stream` method to create a\n",
@@ -177,15 +175,16 @@
"of the type {py:class}`~autogen_core.models.CreateResult`.\n",
"```\n",
"\n",
-"**NB the default usage response is to return zero values**"
+"```{note}\n",
+"The default usage response is to return zero values.\n",
+"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### A Note on Token Usage Counts with Streaming\n",
"Comparing usage returns from the non-streaming `model_client.create(messages=messages)` call above with the streaming `model_client.create_stream(messages=messages)` call, we see differences.\n",
"The non streaming response by default returns valid prompt and completion token usage counts. \n",
"The streamed response by default returns zero values.\n",
"\n",
@@ -195,9 +194,10 @@
"\n",
"to enable this in `create_stream` set `extra_create_args={\"stream_options\": {\"include_usage\": True}},`\n",
"\n",
-"- **Note whilst other API's like LiteLLM also support this, it is not always guarenteed that it is fully supported or correct**\n",
+"```{note}\n",
+"While other APIs such as LiteLLM also support this, it is not always guaranteed that it is fully supported or correct.\n",
+"```\n",
"\n",
-"#### Streaming example with token usage\n"
+"See the example below for how to use the `stream_options` parameter to return usage."
]
},
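The mechanics can be sketched without any model dependency: a hypothetical `collect_stream` helper accumulates content deltas and picks up the usage counts that a provider typically attaches to the final chunk when usage reporting is requested. The chunk dictionaries below are illustrative, not the autogen or OpenAI wire format.

```python
def collect_stream(chunks):
    """Accumulate streamed text and pick up usage from the final chunk."""
    text_parts = []
    usage = {"prompt_tokens": 0, "completion_tokens": 0}  # zero until reported
    for chunk in chunks:
        if chunk.get("content"):
            text_parts.append(chunk["content"])
        if chunk.get("usage"):  # providers usually attach this to the last chunk only
            usage = chunk["usage"]
    return "".join(text_parts), usage


# Simulated stream: content chunks, then a usage-only chunk at the end.
chunks = [
    {"content": "Hello, "},
    {"content": "world!"},
    {"usage": {"prompt_tokens": 12, "completion_tokens": 4}},
]
text, usage = collect_stream(chunks)
```

If usage is not requested, the final chunk never arrives with counts and the defaults (zeros) are what the caller sees, matching the behavior described above.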
{
@@ -325,7 +325,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"## Caching Wrapper\n",
+"## Caching Model Responses\n",
"\n",
"`autogen_ext` implements {py:class}`~autogen_ext.models.cache.ChatCompletionCache` that can wrap any {py:class}`~autogen_core.models.ChatCompletionClient`. Using this wrapper avoids incurring token usage when querying the underlying client with the same prompt multiple times.\n",
"\n",
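The idea behind such a caching wrapper can be sketched in plain Python. `CachingClient` and `CountingStubClient` below are hypothetical stand-ins, not the real `ChatCompletionCache` API: repeated identical prompts hit the cache and never reach the underlying client.

```python
import json


class CachingClient:
    """Wraps a client and memoizes responses keyed on the message list."""

    def __init__(self, client):
        self._client = client
        self._cache = {}

    def create(self, messages):
        key = json.dumps(messages, sort_keys=True)  # canonical cache key
        if key not in self._cache:
            self._cache[key] = self._client.create(messages)
        return self._cache[key]


class CountingStubClient:
    """Stub standing in for a real model client; counts actual calls."""

    def __init__(self):
        self.calls = 0

    def create(self, messages):
        self.calls += 1
        return f"response #{self.calls}"


stub = CountingStubClient()
cached = CachingClient(stub)
first = cached.create([{"role": "user", "content": "hi"}])
second = cached.create([{"role": "user", "content": "hi"}])
```

The second call returns the stored response, so no tokens would be spent; the real wrapper additionally supports pluggable cache stores.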
@@ -534,152 +534,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Manage Model Context\n",
"\n",
"The above `SimpleAgent` always responds with a fresh context that contains only\n",
"the system message and the latest user's message.\n",
"We can use model context classes from {py:mod}`autogen_core.model_context`\n",
"to make the agent \"remember\" previous conversations.\n",
"A model context supports storage and retrieval of Chat Completion messages.\n",
"It is always used together with a model client to generate LLM-based responses.\n",
"\n",
"For example, {py:mod}`~autogen_core.model_context.BufferedChatCompletionContext`\n",
"is a most-recent-used (MRU) context that stores the most recent `buffer_size`\n",
"number of messages. This is useful to avoid context overflow in many LLMs.\n",
"\n",
"Let's update the previous example to use\n",
"{py:mod}`~autogen_core.model_context.BufferedChatCompletionContext`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"from autogen_core.model_context import BufferedChatCompletionContext\n",
"from autogen_core.models import AssistantMessage\n",
"\n",
"\n",
"class SimpleAgentWithContext(RoutedAgent):\n",
" def __init__(self, model_client: ChatCompletionClient) -> None:\n",
" super().__init__(\"A simple agent\")\n",
" self._system_messages = [SystemMessage(content=\"You are a helpful AI assistant.\")]\n",
" self._model_client = model_client\n",
" self._model_context = BufferedChatCompletionContext(buffer_size=5)\n",
"\n",
" @message_handler\n",
" async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n",
" # Prepare input to the chat completion model.\n",
" user_message = UserMessage(content=message.content, source=\"user\")\n",
" # Add message to model context.\n",
" await self._model_context.add_message(user_message)\n",
" # Generate a response.\n",
" response = await self._model_client.create(\n",
" self._system_messages + (await self._model_context.get_messages()),\n",
" cancellation_token=ctx.cancellation_token,\n",
" )\n",
" # Return with the model's response.\n",
" assert isinstance(response.content, str)\n",
" # Add message to model context.\n",
" await self._model_context.add_message(AssistantMessage(content=response.content, source=self.metadata[\"type\"]))\n",
" return Message(content=response.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's try to ask follow up questions after the first one."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: Hello, what are some fun things to do in Seattle?\n",
"Response: Seattle offers a wide variety of fun activities and attractions for visitors. Here are some highlights:\n",
"\n",
"1. **Pike Place Market**: Explore this iconic market, where you can find fresh produce, unique crafts, and the famous fish-throwing vendors. Don't forget to visit the original Starbucks!\n",
"\n",
"2. **Space Needle**: Enjoy breathtaking views of the city and Mount Rainier from the observation deck of this iconic structure. You can also dine at the SkyCity restaurant.\n",
"\n",
"3. **Chihuly Garden and Glass**: Admire the stunning glass art installations created by artist Dale Chihuly. The garden and exhibit are particularly beautiful, especially in good weather.\n",
"\n",
"4. **Museum of Pop Culture (MoPOP)**: Dive into the world of music, science fiction, and pop culture through interactive exhibits and memorabilia.\n",
"\n",
"5. **Seattle Aquarium**: Located on the waterfront, the aquarium features a variety of marine life native to the Pacific Northwest, including otters and diving birds.\n",
"\n",
"6. **Seattle Art Museum (SAM)**: Explore a diverse collection of art from around the world, including Native American art and contemporary pieces.\n",
"\n",
"7. **Ballard Locks**: Watch boats travel between the Puget Sound and Lake Union, and see salmon navigating the fish ladder during spawning season.\n",
"\n",
"8. **Fremont Troll**: Visit this quirky public art installation located under the Aurora Bridge, where you can take fun photos with the giant troll.\n",
"\n",
"9. **Kerry Park**: For a picturesque view of the Seattle skyline, head to Kerry Park on Queen Anne Hill, especially at sunset.\n",
"\n",
"10. **Take a Ferry Ride**: Enjoy the scenic views while taking a ferry to nearby Bainbridge Island or Vashon Island for a relaxing day trip.\n",
"\n",
"11. **Underground Tour**: Explore Seattle's history on an entertaining underground tour in Pioneer Square, where you'll learn about the city's early days.\n",
"\n",
"12. **Attend a Sporting Event**: Depending on the season, catch a Seattle Seahawks (NFL) game, a Seattle Mariners (MLB) game, or a Seattle Sounders (MLS) match.\n",
"\n",
"13. **Explore Discovery Park**: Enjoy nature with hiking trails, beach access, and stunning views of the Puget Sound and Olympic Mountains.\n",
"\n",
"14. **West Seattles Alki Beach**: Relax at this beach with beautiful views of the Seattle skyline and enjoy beachside activities like biking or kayaking.\n",
"\n",
"15. **Dining and Craft Beer**: Seattle has a vibrant food scene and is known for its seafood, coffee culture, and craft breweries. Make sure to explore local restaurants and breweries.\n",
"\n",
"There's something for everyone in Seattle, whether you're interested in nature, art, history, or food!\n",
"-----\n",
"Question: What was the first thing you mentioned?\n",
"Response: The first thing I mentioned was **Pike Place Market**, an iconic market in Seattle where you can find fresh produce, unique crafts, and experience the famous fish-throwing vendors. It's also home to the original Starbucks and various charming shops and eateries.\n"
]
}
],
"source": [
"runtime = SingleThreadedAgentRuntime()\n",
"await SimpleAgentWithContext.register(\n",
" runtime,\n",
" \"simple_agent_context\",\n",
" lambda: SimpleAgentWithContext(\n",
" OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-mini\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY set in the environment.\n",
" )\n",
" ),\n",
")\n",
"# Start the runtime processing messages.\n",
"runtime.start()\n",
"agent_id = AgentId(\"simple_agent_context\", \"default\")\n",
"\n",
"# First question.\n",
"message = Message(\"Hello, what are some fun things to do in Seattle?\")\n",
"print(f\"Question: {message.content}\")\n",
"response = await runtime.send_message(message, agent_id)\n",
"print(f\"Response: {response.content}\")\n",
"print(\"-----\")\n",
"\n",
"# Second question.\n",
"message = Message(\"What was the first thing you mentioned?\")\n",
"print(f\"Question: {message.content}\")\n",
"response = await runtime.send_message(message, agent_id)\n",
"print(f\"Response: {response.content}\")\n",
"\n",
"# Stop the runtime processing messages.\n",
"await runtime.stop()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
-"From the second response, you can see the agent now can recall its own previous responses."
+"See the [Model Context](./model-context.ipynb) page for more details."
]
}
],
@@ -699,7 +558,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
-"version": "3.12.1"
+"version": "3.12.7"
}
},
"nbformat": 4,


@@ -0,0 +1,190 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model Context\n",
"\n",
"A model context supports storage and retrieval of Chat Completion messages.\n",
"It is always used together with a model client to generate LLM-based responses.\n",
"\n",
"For example, {py:class}`~autogen_core.model_context.BufferedChatCompletionContext`\n",
"is a most-recently-used (MRU) context that stores the most recent `buffer_size`\n",
"number of messages. This is useful for avoiding context overflow in many LLMs.\n",
"\n",
"Let's see an example that uses\n",
"{py:class}`~autogen_core.model_context.BufferedChatCompletionContext`."
]
},
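The buffering behavior can be illustrated with a plain-Python stand-in built on `collections.deque` (illustrative only; the real `BufferedChatCompletionContext` exposes an async API):

```python
from collections import deque


class BufferedContext:
    """Keeps only the most recent buffer_size messages, bounding the prompt."""

    def __init__(self, buffer_size: int) -> None:
        self._messages = deque(maxlen=buffer_size)  # oldest messages fall off

    def add_message(self, message: str) -> None:
        self._messages.append(message)

    def get_messages(self) -> list:
        return list(self._messages)


ctx = BufferedContext(buffer_size=3)
for i in range(5):
    ctx.add_message(f"message {i}")
```

After five additions only the last three messages remain, which is exactly what keeps the model's context window from overflowing.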
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"\n",
"from autogen_core import AgentId, MessageContext, RoutedAgent, SingleThreadedAgentRuntime, message_handler\n",
"from autogen_core.model_context import BufferedChatCompletionContext\n",
"from autogen_core.models import AssistantMessage, ChatCompletionClient, SystemMessage, UserMessage\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"@dataclass\n",
"class Message:\n",
" content: str"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"class SimpleAgentWithContext(RoutedAgent):\n",
" def __init__(self, model_client: ChatCompletionClient) -> None:\n",
" super().__init__(\"A simple agent\")\n",
" self._system_messages = [SystemMessage(content=\"You are a helpful AI assistant.\")]\n",
" self._model_client = model_client\n",
" self._model_context = BufferedChatCompletionContext(buffer_size=5)\n",
"\n",
" @message_handler\n",
" async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n",
" # Prepare input to the chat completion model.\n",
" user_message = UserMessage(content=message.content, source=\"user\")\n",
" # Add message to model context.\n",
" await self._model_context.add_message(user_message)\n",
" # Generate a response.\n",
" response = await self._model_client.create(\n",
" self._system_messages + (await self._model_context.get_messages()),\n",
" cancellation_token=ctx.cancellation_token,\n",
" )\n",
" # Return with the model's response.\n",
" assert isinstance(response.content, str)\n",
" # Add message to model context.\n",
" await self._model_context.add_message(AssistantMessage(content=response.content, source=self.metadata[\"type\"]))\n",
" return Message(content=response.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's try to ask follow up questions after the first one."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Question: Hello, what are some fun things to do in Seattle?\n",
"Response: Seattle offers a variety of fun activities and attractions. Here are some highlights:\n",
"\n",
"1. **Pike Place Market**: Visit this iconic market to explore local vendors, fresh produce, artisanal products, and watch the famous fish throwing.\n",
"\n",
"2. **Space Needle**: Take a trip to the observation deck for stunning panoramic views of the city, Puget Sound, and the surrounding mountains.\n",
"\n",
"3. **Chihuly Garden and Glass**: Marvel at the stunning glass art installations created by artist Dale Chihuly, located right next to the Space Needle.\n",
"\n",
"4. **Seattle Waterfront**: Enjoy a stroll along the waterfront, visit the Seattle Aquarium, and take a ferry ride to nearby islands like Bainbridge Island.\n",
"\n",
"5. **Museum of Pop Culture (MoPOP)**: Explore exhibits on music, science fiction, and pop culture in this architecturally striking building.\n",
"\n",
"6. **Seattle Art Museum (SAM)**: Discover an extensive collection of art from around the world, including contemporary and Native American art.\n",
"\n",
"7. **Gas Works Park**: Relax in this unique park that features remnants of an old gasification plant, offering great views of the Seattle skyline and Lake Union.\n",
"\n",
"8. **Discovery Park**: Enjoy nature trails, beaches, and beautiful views of the Puget Sound and the Olympic Mountains in this large urban park.\n",
"\n",
"9. **Ballard Locks**: Watch boats navigate the locks and see fish swimming upstream during the salmon migration season.\n",
"\n",
"10. **Fremont Troll**: Check out this quirky public art installation under a bridge in the Fremont neighborhood.\n",
"\n",
"11. **Underground Tour**: Take an entertaining guided tour through the underground passages of Pioneer Square to learn about Seattle's history.\n",
"\n",
"12. **Brewery Tours**: Seattle is known for its craft beer scene. Visit local breweries for tastings and tours.\n",
"\n",
"13. **Seattle Center**: Explore the cultural complex that includes the Space Needle, MoPOP, and various festivals and events throughout the year.\n",
"\n",
"These are just a few options, and Seattle has something for everyone, whether you're into outdoor activities, culture, history, or food!\n",
"-----\n",
"Question: What was the first thing you mentioned?\n",
"Response: The first thing I mentioned was **Pike Place Market**. It's an iconic market in Seattle known for its local vendors, fresh produce, artisanal products, and the famous fish throwing by the fishmongers. It's a vibrant place full of sights, sounds, and delicious food.\n"
]
}
],
"source": [
"runtime = SingleThreadedAgentRuntime()\n",
"await SimpleAgentWithContext.register(\n",
" runtime,\n",
" \"simple_agent_context\",\n",
" lambda: SimpleAgentWithContext(\n",
" OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-mini\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY set in the environment.\n",
" )\n",
" ),\n",
")\n",
"# Start the runtime processing messages.\n",
"runtime.start()\n",
"agent_id = AgentId(\"simple_agent_context\", \"default\")\n",
"\n",
"# First question.\n",
"message = Message(\"Hello, what are some fun things to do in Seattle?\")\n",
"print(f\"Question: {message.content}\")\n",
"response = await runtime.send_message(message, agent_id)\n",
"print(f\"Response: {response.content}\")\n",
"print(\"-----\")\n",
"\n",
"# Second question.\n",
"message = Message(\"What was the first thing you mentioned?\")\n",
"print(f\"Question: {message.content}\")\n",
"response = await runtime.send_message(message, agent_id)\n",
"print(f\"Response: {response.content}\")\n",
"\n",
"# Stop the runtime processing messages.\n",
"await runtime.stop()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the second response, you can see the agent now can recall its own previous responses."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -9,7 +9,7 @@ like software development.
A multi-agent design pattern is a structure that emerges from message protocols:
it describes how agents interact with each other to solve problems.
-For example, the [tool-equipped agent](../framework/tools.ipynb#tool-equipped-agent) in
+For example, the [tool-equipped agent](../components/tools.ipynb#tool-equipped-agent) in
the previous section employs a design pattern called ReAct,
which involves an agent interacting with tools.


@@ -1,345 +1,345 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Agent and Agent Runtime\n",
"\n",
"In this and the following section, we focus on the core concepts of AutoGen:\n",
"agents, agent runtime, messages, and communication -- \n",
"the foundational building blocks of multi-agent applications.\n",
"\n",
"```{note}\n",
"The Core API is designed to be unopinionated and flexible, so at times you\n",
"may find it challenging. Continue if you are building\n",
"an interactive, scalable and distributed multi-agent system and want full control\n",
"of all workflows.\n",
"If you just want to get something running\n",
"quickly, you may take a look at the [AgentChat API](../../agentchat-user-guide/index.md).\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An agent in AutoGen is an entity defined by the base interface {py:class}`~autogen_core.Agent`.\n",
"It has a unique identifier of the type {py:class}`~autogen_core.AgentId`\n",
"and a metadata dictionary of the type {py:class}`~autogen_core.AgentMetadata`.\n",
"\n",
"In most cases, you can subclass your agents from the higher-level class {py:class}`~autogen_core.RoutedAgent`, which enables you to route messages to the corresponding message handler specified with the {py:meth}`~autogen_core.message_handler` decorator and a proper type hint for the `message` variable.\n",
"\n",
"An agent runtime is the execution environment for agents in AutoGen.\n",
"\n",
"Similar to the runtime environment of a programming language,\n",
"an agent runtime provides the necessary infrastructure to facilitate communication\n",
"between agents, manage agent lifecycles, enforce security boundaries, and support monitoring and\n",
"debugging.\n",
"\n",
"For local development, developers can use {py:class}`~autogen_core.SingleThreadedAgentRuntime`,\n",
"which can be embedded in a Python application.\n",
"\n",
"```{note}\n",
"Agents are not directly instantiated and managed by application code.\n",
"Instead, they are created by the runtime when needed and managed by the runtime.\n",
"\n",
"If you are already familiar with [AgentChat](../../agentchat-user-guide/index.md),\n",
"it is important to note that AgentChat's agents such as\n",
"{py:class}`~autogen_agentchat.agents.AssistantAgent` are created by application \n",
"and thus not directly managed by the runtime. To use an AgentChat agent in Core,\n",
"you need to create a wrapper Core agent that delegates messages to the AgentChat agent\n",
"and let the runtime manage the wrapper agent.\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementing an Agent\n",
"\n",
"To implement an agent, the developer must subclass the {py:class}`~autogen_core.RoutedAgent` class\n",
"and implement a message handler method for each message type the agent is expected to handle using\n",
"the {py:meth}`~autogen_core.message_handler` decorator.\n",
"For example,\n",
"the following agent handles a simple message type `MyMessageType` and prints the message it receives:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"\n",
"from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler\n",
"\n",
"\n",
"@dataclass\n",
"class MyMessageType:\n",
" content: str\n",
"\n",
"\n",
"class MyAgent(RoutedAgent):\n",
" def __init__(self) -> None:\n",
" super().__init__(\"MyAgent\")\n",
"\n",
" @message_handler\n",
" async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:\n",
" print(f\"{self.id.type} received message: {message.content}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This agent only handles `MyMessageType`, and messages of that type will be delivered to the `handle_my_message_type` method. Developers can have multiple message handlers for different message types by using the {py:meth}`~autogen_core.message_handler` decorator and setting the type hint for the `message` variable in each handler function. You can also leverage a [Python typing union](https://docs.python.org/3/library/typing.html#typing.Union) for the `message` variable in one message handler function if it better suits the agent's logic.\n",
"See the next section on [message and communication](./message-and-communication.ipynb)."
]
},
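The type-based routing described above can be sketched with a plain dictionary dispatch; `Router`, `TextMsg`, and `ImageMsg` are hypothetical names for illustration, not part of `autogen_core` (the real decorator reads the handler's type annotation to build an equivalent mapping).

```python
from dataclasses import dataclass


@dataclass
class TextMsg:
    content: str


@dataclass
class ImageMsg:
    url: str


class Router:
    """Dispatches each message to the handler registered for its type."""

    def __init__(self) -> None:
        self._handlers = {}

    def register(self, msg_type, handler) -> None:
        self._handlers[msg_type] = handler

    def dispatch(self, message):
        return self._handlers[type(message)](message)


router = Router()
router.register(TextMsg, lambda m: f"text: {m.content}")
router.register(ImageMsg, lambda m: f"image: {m.url}")
result = router.dispatch(TextMsg("hello"))
```

A union type hint on a single handler corresponds to registering the same function under several keys in this sketch.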
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using an AgentChat Agent\n",
"\n",
"If you have an [AgentChat](../../agentchat-user-guide/index.md) agent and want to use it in the Core API, you can create\n",
"a wrapper {py:class}`~autogen_core.RoutedAgent` that delegates messages to the AgentChat agent.\n",
"The following example shows how to create a wrapper agent for the {py:class}`~autogen_agentchat.agents.AssistantAgent`\n",
"in AgentChat."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.messages import TextMessage\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"\n",
"class MyAssistant(RoutedAgent):\n",
" def __init__(self, name: str) -> None:\n",
" super().__init__(name)\n",
" model_client = OpenAIChatCompletionClient(model=\"gpt-4o\")\n",
" self._delegate = AssistantAgent(name, model_client=model_client)\n",
"\n",
" @message_handler\n",
" async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:\n",
" print(f\"{self.id.type} received message: {message.content}\")\n",
" response = await self._delegate.on_messages(\n",
" [TextMessage(content=message.content, source=\"user\")], ctx.cancellation_token\n",
" )\n",
" print(f\"{self.id.type} responded: {response.chat_message.content}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For how to use model client, see the [Model Client](./model-clients.ipynb) section.\n",
"\n",
"Since the Core API is unopinionated,\n",
"you are not required to use the AgentChat API to use the Core API.\n",
"You can implement your own agents or use another agent framework."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Registering Agent Type\n",
"\n",
"To make agents available to the runtime, developers can use the\n",
"{py:meth}`~autogen_core.BaseAgent.register` class method of the\n",
"{py:class}`~autogen_core.BaseAgent` class.\n",
"The process of registration associates an agent type, which is uniquely identified by a string,\n",
"with a factory function\n",
"that creates an instance of the agent class.\n",
"The factory function is used to allow automatic creation of agent instances \n",
"when they are needed.\n",
"\n",
"Agent type ({py:class}`~autogen_core.AgentType`) is not the same as the agent class. In this example,\n",
"the agent type is `AgentType(\"my_agent\")` or `AgentType(\"my_assistant\")` and the agent class is the Python class `MyAgent` or `MyAssistant`.\n",
"The factory function is expected to return an instance of the agent class \n",
"on which the {py:meth}`~autogen_core.BaseAgent.register` class method is invoked.\n",
"Read [Agent Identity and Lifecycles](../core-concepts/agent-identity-and-lifecycle.md)\n",
"to learn more about agent type and identity.\n",
"\n",
"```{note}\n",
"Different agent types can be registered with factory functions that return \n",
"the same agent class. For example, in the factory functions, \n",
"variations of the constructor parameters\n",
"can be used to create different instances of the same agent class.\n",
"```\n",
"\n",
"To register our agent types with the \n",
"{py:class}`~autogen_core.SingleThreadedAgentRuntime`,\n",
"the following code can be used:"
]
},
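The type-to-factory idea can be sketched independently of the runtime. `AgentRegistry` and `Greeter` below are hypothetical, showing how a runtime can store one factory per agent-type string, instantiate lazily on first use, and let two registered types share one agent class via different constructor arguments.

```python
class AgentRegistry:
    """Maps agent-type strings to factories; instantiates lazily."""

    def __init__(self) -> None:
        self._factories = {}
        self._instances = {}

    def register(self, agent_type: str, factory) -> None:
        self._factories[agent_type] = factory

    def get(self, agent_type: str):
        # Instantiate on first use, as a runtime does on first message delivery.
        if agent_type not in self._instances:
            self._instances[agent_type] = self._factories[agent_type]()
        return self._instances[agent_type]


class Greeter:
    """One agent class reused by two different agent types."""

    def __init__(self, greeting: str) -> None:
        self.greeting = greeting


registry = AgentRegistry()
registry.register("friendly", lambda: Greeter("Hello!"))
registry.register("formal", lambda: Greeter("Good day."))
```

Requesting the same type twice returns the same instance, while the two types hold distinct `Greeter` instances configured by their factories.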
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentType(type='my_assistant')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from autogen_core import SingleThreadedAgentRuntime\n",
"\n",
"runtime = SingleThreadedAgentRuntime()\n",
"await MyAgent.register(runtime, \"my_agent\", lambda: MyAgent())\n",
"await MyAssistant.register(runtime, \"my_assistant\", lambda: MyAssistant(\"my_assistant\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once an agent type is registered, we can send a direct message to an agent instance\n",
"using an {py:class}`~autogen_core.AgentId`.\n",
"The runtime will create the instance the first time it delivers a\n",
"message to this instance."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"my_agent received message: Hello, World!\n",
"my_assistant received message: Hello, World!\n",
"my_assistant responded: Hello! How can I assist you today?\n"
]
}
],
"source": [
"runtime.start() # Start processing messages in the background.\n",
"await runtime.send_message(MyMessageType(\"Hello, World!\"), AgentId(\"my_agent\", \"default\"))\n",
"await runtime.send_message(MyMessageType(\"Hello, World!\"), AgentId(\"my_assistant\", \"default\"))\n",
"await runtime.stop() # Stop processing messages in the background."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{note}\n",
"Because the runtime manages the lifecycle of agents, an {py:class}`~autogen_core.AgentId`\n",
"is only used to communicate with the agent or retrieve its metadata (e.g., description).\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running the Single-Threaded Agent Runtime\n",
"\n",
"The above code snippet uses {py:meth}`~autogen_core.SingleThreadedAgentRuntime.start` to start a background task\n",
"to process and deliver messages to recipients' message handlers.\n",
"This is a feature of the\n",
"local embedded runtime {py:class}`~autogen_core.SingleThreadedAgentRuntime`.\n",
"\n",
"To stop the background task immediately, use the {py:meth}`~autogen_core.SingleThreadedAgentRuntime.stop` method:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"runtime.start()\n",
"# ... Send messages, publish messages, etc.\n",
"await runtime.stop() # This will return immediately but will not cancel\n",
"# any in-progress message handling."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can resume the background task by calling {py:meth}`~autogen_core.SingleThreadedAgentRuntime.start` again.\n",
"\n",
"For batch scenarios such as running benchmarks for evaluating agents,\n",
"you may want to wait for the background task to stop automatically when\n",
"there are no unprocessed messages and no agent is handling messages --\n",
"the batch may be considered complete.\n",
"You can achieve this by using the {py:meth}`~autogen_core.SingleThreadedAgentRuntime.stop_when_idle` method:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"runtime.start()\n",
"# ... Send messages, publish messages, etc.\n",
"await runtime.stop_when_idle() # This will block until the runtime is idle."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To close the runtime and release resources, use the {py:meth}`~autogen_core.SingleThreadedAgentRuntime.close` method:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"await runtime.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other runtime implementations will have their own ways of running the runtime."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Agent and Agent Runtime\n",
"\n",
"In this and the following section, we focus on the core concepts of AutoGen:\n",
"agents, agent runtime, messages, and communication -- \n",
"the foundational building blocks for an multi-agent applications.\n",
"\n",
"```{note}\n",
"The Core API is designed to be unopinionated and flexible. So at times, you\n",
"may find it challenging. Continue if you are building\n",
"an interactive, scalable and distributed multi-agent system and want full control\n",
"of all workflows.\n",
"If you just want to get something running\n",
"quickly, you may take a look at the [AgentChat API](../../agentchat-user-guide/index.md).\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"An agent in AutoGen is an entity defined by the base interface {py:class}`~autogen_core.Agent`.\n",
"It has a unique identifier of the type {py:class}`~autogen_core.AgentId`,\n",
"a metadata dictionary of the type {py:class}`~autogen_core.AgentMetadata`.\n",
"\n",
"In most cases, you can subclass your agents from higher level class {py:class}`~autogen_core.RoutedAgent` which enables you to route messages to corresponding message handler specified with {py:meth}`~autogen_core.message_handler` decorator and proper type hint for the `message` variable.\n",
"An agent runtime is the execution environment for agents in AutoGen.\n",
"\n",
"Similar to the runtime environment of a programming language,\n",
"an agent runtime provides the necessary infrastructure to facilitate communication\n",
"between agents, manage agent lifecycles, enforce security boundaries, and support monitoring and\n",
"debugging.\n",
"\n",
"For local development, developers can use {py:class}`~autogen_core.SingleThreadedAgentRuntime`,\n",
"which can be embedded in a Python application.\n",
"\n",
"```{note}\n",
"Agents are not directly instantiated and managed by application code.\n",
"Instead, they are created by the runtime when needed and managed by the runtime.\n",
"\n",
"If you are already familiar with [AgentChat](../../agentchat-user-guide/index.md),\n",
"it is important to note that AgentChat's agents such as\n",
"{py:class}`~autogen_agentchat.agents.AssistantAgent` are created by application \n",
"and thus not directly managed by the runtime. To use an AgentChat agent in Core,\n",
"you need to create a wrapper Core agent that delegates messages to the AgentChat agent\n",
"and let the runtime manage the wrapper agent.\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Implementing an Agent\n",
"\n",
"To implement an agent, the developer must subclass the {py:class}`~autogen_core.RoutedAgent` class\n",
"and implement a message handler method for each message type the agent is expected to handle using\n",
"the {py:meth}`~autogen_core.message_handler` decorator.\n",
"For example,\n",
"the following agent handles a simple message type `MyMessageType` and prints the message it receives:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"\n",
"from autogen_core import AgentId, MessageContext, RoutedAgent, message_handler\n",
"\n",
"\n",
"@dataclass\n",
"class MyMessageType:\n",
" content: str\n",
"\n",
"\n",
"class MyAgent(RoutedAgent):\n",
" def __init__(self) -> None:\n",
" super().__init__(\"MyAgent\")\n",
"\n",
" @message_handler\n",
" async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:\n",
" print(f\"{self.id.type} received message: {message.content}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This agent only handles `MyMessageType` and messages will be delivered to `handle_my_message_type` method. Developers can have multiple message handlers for different message types by using {py:meth}`~autogen_core.message_handler` decorator and setting the type hint for the `message` variable in the handler function. You can also leverage [python typing union](https://docs.python.org/3/library/typing.html#typing.Union) for the `message` variable in one message handler function if it better suits agent's logic.\n",
"See the next section on [message and communication](./message-and-communication.ipynb)."
]
},
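{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a hypothetical sketch (the `TextTask` and `ImageTask` types below are illustrative, not part of the AutoGen API), an agent can declare one handler per message type:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"\n",
"from autogen_core import MessageContext, RoutedAgent, message_handler\n",
"\n",
"\n",
"@dataclass\n",
"class TextTask:\n",
"    content: str\n",
"\n",
"\n",
"@dataclass\n",
"class ImageTask:\n",
"    url: str\n",
"\n",
"\n",
"class MultiHandlerAgent(RoutedAgent):\n",
"    def __init__(self) -> None:\n",
"        super().__init__(\"MultiHandlerAgent\")\n",
"\n",
"    # Routed here when the incoming message is a TextTask.\n",
"    @message_handler\n",
"    async def handle_text_task(self, message: TextTask, ctx: MessageContext) -> None:\n",
"        print(f\"{self.id.type} received text task: {message.content}\")\n",
"\n",
"    # Routed here when the incoming message is an ImageTask.\n",
"    @message_handler\n",
"    async def handle_image_task(self, message: ImageTask, ctx: MessageContext) -> None:\n",
"        print(f\"{self.id.type} received image task: {message.url}\")"
]
},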
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using an AgentChat Agent\n",
"\n",
"If you have an [AgentChat](../../agentchat-user-guide/index.md) agent and want to use it in the Core API, you can create\n",
"a wrapper {py:class}`~autogen_core.RoutedAgent` that delegates messages to the AgentChat agent.\n",
"The following example shows how to create a wrapper agent for the {py:class}`~autogen_agentchat.agents.AssistantAgent`\n",
"in AgentChat."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.messages import TextMessage\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"\n",
"class MyAssistant(RoutedAgent):\n",
" def __init__(self, name: str) -> None:\n",
" super().__init__(name)\n",
" model_client = OpenAIChatCompletionClient(model=\"gpt-4o\")\n",
" self._delegate = AssistantAgent(name, model_client=model_client)\n",
"\n",
" @message_handler\n",
" async def handle_my_message_type(self, message: MyMessageType, ctx: MessageContext) -> None:\n",
" print(f\"{self.id.type} received message: {message.content}\")\n",
" response = await self._delegate.on_messages(\n",
" [TextMessage(content=message.content, source=\"user\")], ctx.cancellation_token\n",
" )\n",
" print(f\"{self.id.type} responded: {response.chat_message.content}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For how to use model client, see the [Model Client](../components/model-clients.ipynb) section.\n",
"\n",
"Since the Core API is unopinionated,\n",
"you are not required to use the AgentChat API to use the Core API.\n",
"You can implement your own agents or use another agent framework."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Registering Agent Type\n",
"\n",
"To make agents available to the runtime, developers can use the\n",
"{py:meth}`~autogen_core.BaseAgent.register` class method of the\n",
"{py:class}`~autogen_core.BaseAgent` class.\n",
"The process of registration associates an agent type, which is uniquely identified by a string, \n",
"and a factory function\n",
"that creates an instance of the agent type of the given class.\n",
"The factory function is used to allow automatic creation of agent instances \n",
"when they are needed.\n",
"\n",
"Agent type ({py:class}`~autogen_core.AgentType`) is not the same as the agent class. In this example,\n",
"the agent type is `AgentType(\"my_agent\")` or `AgentType(\"my_assistant\")` and the agent class is the Python class `MyAgent` or `MyAssistantAgent`.\n",
"The factory function is expected to return an instance of the agent class \n",
"on which the {py:meth}`~autogen_core.BaseAgent.register` class method is invoked.\n",
"Read [Agent Identity and Lifecycles](../core-concepts/agent-identity-and-lifecycle.md)\n",
"to learn more about agent type and identity.\n",
"\n",
"```{note}\n",
"Different agent types can be registered with factory functions that return \n",
"the same agent class. For example, in the factory functions, \n",
"variations of the constructor parameters\n",
"can be used to create different instances of the same agent class.\n",
"```\n",
"\n",
"To register our agent types with the \n",
"{py:class}`~autogen_core.SingleThreadedAgentRuntime`,\n",
"the following code can be used:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentType(type='my_assistant')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from autogen_core import SingleThreadedAgentRuntime\n",
"\n",
"runtime = SingleThreadedAgentRuntime()\n",
"await MyAgent.register(runtime, \"my_agent\", lambda: MyAgent())\n",
"await MyAssistant.register(runtime, \"my_assistant\", lambda: MyAssistant(\"my_assistant\"))"
]
},
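{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch of the note above, the same agent class can back multiple agent types by varying the constructor arguments in each factory function (the type names `assistant_a` and `assistant_b` here are illustrative):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The same MyAssistant class registered under two different agent types.\n",
"# Each factory passes a different name to the constructor.\n",
"await MyAssistant.register(runtime, \"assistant_a\", lambda: MyAssistant(\"assistant_a\"))\n",
"await MyAssistant.register(runtime, \"assistant_b\", lambda: MyAssistant(\"assistant_b\"))"
]
},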
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once an agent type is registered, we can send a direct message to an agent instance\n",
"using an {py:class}`~autogen_core.AgentId`.\n",
"The runtime will create the instance the first time it delivers a\n",
"message to this instance."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"my_agent received message: Hello, World!\n",
"my_assistant received message: Hello, World!\n",
"my_assistant responded: Hello! How can I assist you today?\n"
]
}
],
"source": [
"runtime.start() # Start processing messages in the background.\n",
"await runtime.send_message(MyMessageType(\"Hello, World!\"), AgentId(\"my_agent\", \"default\"))\n",
"await runtime.send_message(MyMessageType(\"Hello, World!\"), AgentId(\"my_assistant\", \"default\"))\n",
"await runtime.stop() # Stop processing messages in the background."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{note}\n",
"Because the runtime manages the lifecycle of agents, an {py:class}`~autogen_core.AgentId`\n",
"is only used to communicate with the agent or retrieve its metadata (e.g., description).\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Running the Single-Threaded Agent Runtime\n",
"\n",
"The above code snippet uses {py:meth}`~autogen_core.SingleThreadedAgentRuntime.start` to start a background task\n",
"to process and deliver messages to recepients' message handlers.\n",
"This is a feature of the\n",
"local embedded runtime {py:class}`~autogen_core.SingleThreadedAgentRuntime`.\n",
"\n",
"To stop the background task immediately, use the {py:meth}`~autogen_core.SingleThreadedAgentRuntime.stop` method:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"runtime.start()\n",
"# ... Send messages, publish messages, etc.\n",
"await runtime.stop() # This will return immediately but will not cancel\n",
"# any in-progress message handling."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can resume the background task by calling {py:meth}`~autogen_core.SingleThreadedAgentRuntime.start` again.\n",
"\n",
"For batch scenarios such as running benchmarks for evaluating agents,\n",
"you may want to wait for the background task to stop automatically when\n",
"there are no unprocessed messages and no agent is handling messages --\n",
"the batch may considered complete.\n",
"You can achieve this by using the {py:meth}`~autogen_core.SingleThreadedAgentRuntime.stop_when_idle` method:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"runtime.start()\n",
"# ... Send messages, publish messages, etc.\n",
"await runtime.stop_when_idle() # This will block until the runtime is idle."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To close the runtime and release resources, use the {py:meth}`~autogen_core.SingleThreadedAgentRuntime.close` method:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"await runtime.close()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other runtime implementations will have their own ways of running the runtime."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -34,15 +34,23 @@ core-concepts/topic-and-subscription
framework/agent-and-agent-runtime
framework/message-and-communication
framework/model-clients
framework/tools
framework/logging
framework/telemetry
framework/command-line-code-executors
framework/distributed-agent-runtime
framework/component-config
```
```{toctree}
:maxdepth: 1
:hidden:
:caption: Components Guide
components/model-clients
components/model-context
components/tools
components/command-line-code-executors
```
```{toctree}
:maxdepth: 1
:hidden:

View File

@@ -81,5 +81,5 @@ pip install "autogen-ext[azure]"
We recommend using Docker to use {py:class}`~autogen_ext.code_executors.docker.DockerCommandLineCodeExecutor` for execution of model-generated code.
To install Docker, follow the instructions for your operating system on the [Docker website](https://docs.docker.com/get-docker/).
To learn more code execution, see [Command Line Code Executors](./framework/command-line-code-executors.ipynb)
To learn more code execution, see [Command Line Code Executors](./components/command-line-code-executors.ipynb)
and [Code Execution](./design-patterns/code-execution-groupchat.ipynb).