Define AgentEvent, rename tool call messages to events. (#4750)

* Define AgentEvent, rename tool call messages to events.

* update doc

* Use AgentEvent | ChatMessage to replace AgentMessage

* Update docs

* update deprecation notice

* remove unused

* fix doc

* format
This commit is contained in:
Eric Zhu
2024-12-18 14:09:19 -08:00
committed by GitHub
parent 7a7eb7449a
commit e902e94b14
34 changed files with 3642 additions and 3615 deletions

@@ -12,7 +12,7 @@
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.name`: The unique name of the agent.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.description`: The description of the agent in text.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: Send the agent a sequence of {py:class}`~autogen_agentchat.messages.ChatMessage` and get a {py:class}`~autogen_agentchat.base.Response`.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`: Same as {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` but returns an iterator of {py:class}`~autogen_agentchat.messages.AgentMessage` followed by a {py:class}`~autogen_agentchat.base.Response` as the last item.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`: Same as {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` but returns an iterator of {py:class}`~autogen_agentchat.messages.AgentEvent` or {py:class}`~autogen_agentchat.messages.ChatMessage` followed by a {py:class}`~autogen_agentchat.base.Response` as the last item.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_reset`: Reset the agent to its initial state.\n",
"\n",
"See {py:mod}`autogen_agentchat.messages` for more information on AgentChat message types.\n",
@@ -74,7 +74,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[ToolCallMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15), content=[FunctionCall(id='call_hqVC7UJUPhKaiJwgVKkg66ak', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallResultMessage(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_hqVC7UJUPhKaiJwgVKkg66ak')])]\n",
"[ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=61, completion_tokens=15), content=[FunctionCall(id='call_hqVC7UJUPhKaiJwgVKkg66ak', arguments='{\"query\":\"AutoGen\"}', name='web_search')]), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='AutoGen is a programming framework for building multi-agent applications.', call_id='call_hqVC7UJUPhKaiJwgVKkg66ak')])]\n",
"source='assistant' models_usage=RequestUsage(prompt_tokens=92, completion_tokens=14) content='AutoGen is a programming framework designed for building multi-agent applications.'\n"
]
}
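The renamed event types in the output above pair each tool-call request with its execution result via the call id. The sketch below uses plain dataclasses as stand-ins for the real `autogen_agentchat.messages` and `autogen_core` classes, carrying only the fields visible in the output:

```python
from dataclasses import dataclass
from typing import List


# Stand-ins for autogen_core.FunctionCall / FunctionExecutionResult and the
# renamed event types; illustrative only, not the real AgentChat classes.
@dataclass
class FunctionCall:
    id: str
    arguments: str
    name: str


@dataclass
class FunctionExecutionResult:
    content: str
    call_id: str


@dataclass
class ToolCallRequestEvent:
    source: str
    content: List[FunctionCall]


@dataclass
class ToolCallExecutionEvent:
    source: str
    content: List[FunctionExecutionResult]


request = ToolCallRequestEvent(
    source="assistant",
    content=[FunctionCall(id="call_1", arguments='{"query": "AutoGen"}', name="web_search")],
)
result = ToolCallExecutionEvent(
    source="assistant",
    content=[FunctionExecutionResult(content="AutoGen is a programming framework.", call_id="call_1")],
)

# Each execution result is matched back to its originating request by call id.
assert result.content[0].call_id == request.content[0].id
```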

@@ -1,313 +1,313 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Custom Agents\n",
"\n",
"You may have agents with behaviors that do not fall into a preset. \n",
"In such cases, you can build custom agents.\n",
"\n",
"All agents in AgentChat inherit from {py:class}`~autogen_agentchat.agents.BaseChatAgent` \n",
"class and implement the following abstract methods and attributes:\n",
"\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: The abstract method that defines the behavior of the agent in response to messages. This method is called when the agent is asked to provide a response in {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run`. It returns a {py:class}`~autogen_agentchat.base.Response` object.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_reset`: The abstract method that resets the agent to its initial state. This method is called when the agent is asked to reset itself.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.produced_message_types`: The list of possible {py:class}`~autogen_agentchat.messages.ChatMessage` message types the agent can produce in its response.\n",
"\n",
"Optionally, you can implement the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` method to stream messages as they are generated by the agent. If this method is not implemented, the agent\n",
"uses the default implementation of {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`\n",
"that calls the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method and\n",
"yields all messages in the response."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CountDownAgent\n",
"\n",
"In this example, we create a simple agent that counts down from a given number to zero,\n",
"and produces a stream of messages with the current count."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"3...\n",
"2...\n",
"1...\n",
"Done!\n"
]
}
],
"source": [
"from typing import AsyncGenerator, List, Sequence\n",
"\n",
"from autogen_agentchat.agents import BaseChatAgent\n",
"from autogen_agentchat.base import Response\n",
"from autogen_agentchat.messages import AgentEvent, ChatMessage, TextMessage\n",
"from autogen_core import CancellationToken\n",
"\n",
"\n",
"class CountDownAgent(BaseChatAgent):\n",
" def __init__(self, name: str, count: int = 3):\n",
" super().__init__(name, \"A simple agent that counts down.\")\n",
" self._count = count\n",
"\n",
" @property\n",
" def produced_message_types(self) -> List[type[ChatMessage]]:\n",
" return [TextMessage]\n",
"\n",
" async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
" # Calls the on_messages_stream.\n",
" response: Response | None = None\n",
" async for message in self.on_messages_stream(messages, cancellation_token):\n",
" if isinstance(message, Response):\n",
" response = message\n",
" assert response is not None\n",
" return response\n",
"\n",
" async def on_messages_stream(\n",
" self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken\n",
" ) -> AsyncGenerator[AgentEvent | ChatMessage | Response, None]:\n",
" inner_messages: List[AgentEvent | ChatMessage] = []\n",
" for i in range(self._count, 0, -1):\n",
" msg = TextMessage(content=f\"{i}...\", source=self.name)\n",
" inner_messages.append(msg)\n",
" yield msg\n",
" # The response is returned at the end of the stream.\n",
" # It contains the final message and all the inner messages.\n",
" yield Response(chat_message=TextMessage(content=\"Done!\", source=self.name), inner_messages=inner_messages)\n",
"\n",
" async def on_reset(self, cancellation_token: CancellationToken) -> None:\n",
" pass\n",
"\n",
"\n",
"async def run_countdown_agent() -> None:\n",
" # Create a countdown agent.\n",
" countdown_agent = CountDownAgent(\"countdown\")\n",
"\n",
" # Run the agent with a given task and stream the response.\n",
" async for message in countdown_agent.on_messages_stream([], CancellationToken()):\n",
" if isinstance(message, Response):\n",
" print(message.chat_message.content)\n",
" else:\n",
" print(message.content)\n",
"\n",
"\n",
"# Use asyncio.run(run_countdown_agent()) when running in a script.\n",
"await run_countdown_agent()"
]
},
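The yield-then-`Response` pattern used by `CountDownAgent` can be reduced to a plain-Python sketch. The dataclasses below are stand-ins for the real AgentChat types: intermediate messages stream out first, and the final item carries the concluding message plus all inner messages.

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncGenerator, List, Union


# Illustrative stand-ins, not the real autogen_agentchat classes.
@dataclass
class TextMessage:
    content: str
    source: str


@dataclass
class Response:
    chat_message: TextMessage
    inner_messages: List[TextMessage]


async def countdown_stream(count: int) -> AsyncGenerator[Union[TextMessage, Response], None]:
    inner: List[TextMessage] = []
    for i in range(count, 0, -1):
        msg = TextMessage(content=f"{i}...", source="countdown")
        inner.append(msg)
        yield msg  # Stream each intermediate message as it is produced.
    # The Response comes last and bundles the final message with the inner ones.
    yield Response(chat_message=TextMessage(content="Done!", source="countdown"), inner_messages=inner)


async def main() -> List[str]:
    outputs: List[str] = []
    async for item in countdown_stream(3):
        if isinstance(item, Response):
            outputs.append(item.chat_message.content)
        else:
            outputs.append(item.content)
    return outputs


print(asyncio.run(main()))  # prints ['3...', '2...', '1...', 'Done!']
```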
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ArithmeticAgent\n",
"\n",
"In this example, we create an agent class that can perform simple arithmetic operations\n",
"on a given integer. Then, we will use different instances of this agent class\n",
"in a {py:class}`~autogen_agentchat.teams.SelectorGroupChat`\n",
"to transform a given integer into another integer by applying a sequence of arithmetic operations.\n",
"\n",
"The `ArithmeticAgent` class takes an `operator_func` that takes an integer and returns an integer,\n",
"after applying an arithmetic operation to the integer.\n",
"In its `on_messages` method, it applies the `operator_func` to the integer in the input message,\n",
"and returns a response with the result."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Callable, List, Sequence\n",
"\n",
"from autogen_agentchat.agents import BaseChatAgent\n",
"from autogen_agentchat.base import Response\n",
"from autogen_agentchat.conditions import MaxMessageTermination\n",
"from autogen_agentchat.messages import ChatMessage, TextMessage\n",
"from autogen_agentchat.teams import SelectorGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_core import CancellationToken\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"\n",
"class ArithmeticAgent(BaseChatAgent):\n",
" def __init__(self, name: str, description: str, operator_func: Callable[[int], int]) -> None:\n",
" super().__init__(name, description=description)\n",
" self._operator_func = operator_func\n",
" self._message_history: List[ChatMessage] = []\n",
"\n",
" @property\n",
" def produced_message_types(self) -> List[type[ChatMessage]]:\n",
" return [TextMessage]\n",
"\n",
" async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
" # Update the message history.\n",
" # NOTE: messages may be an empty list, which means the agent was called again without new messages from the caller.\n",
" self._message_history.extend(messages)\n",
" # Parse the number in the last message.\n",
" assert isinstance(self._message_history[-1], TextMessage)\n",
" number = int(self._message_history[-1].content)\n",
" # Apply the operator function to the number.\n",
" result = self._operator_func(number)\n",
" # Create a new message with the result.\n",
" response_message = TextMessage(content=str(result), source=self.name)\n",
" # Update the message history.\n",
" self._message_history.append(response_message)\n",
" # Return the response.\n",
" return Response(chat_message=response_message)\n",
"\n",
" async def on_reset(self, cancellation_token: CancellationToken) -> None:\n",
" pass"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{note}\n",
"The `on_messages` method may be called with an empty list of messages, in which\n",
"case it means the agent was called previously and is now being called again,\n",
"without any new messages from the caller. So it is important to keep a history\n",
"of the previous messages received by the agent, and use that history to generate\n",
"the response.\n",
"```"
]
},
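The note above, that `on_messages` may arrive with an empty message list, is easy to see with a toy stand-in that keeps its own history. The class below is illustrative only, not part of AgentChat:

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class TextMessage:
    content: str
    source: str


class HistoryKeepingAgent:
    """Replies with the last number in its history plus one."""

    def __init__(self) -> None:
        self._history: List[TextMessage] = []

    def on_messages(self, messages: Sequence[TextMessage]) -> TextMessage:
        # Extend history with whatever arrived; this may be an empty sequence.
        self._history.extend(messages)
        # Even with no new messages, the stored history supplies the context.
        last = self._history[-1]
        reply = TextMessage(content=str(int(last.content) + 1), source="agent")
        self._history.append(reply)
        return reply


agent = HistoryKeepingAgent()
first = agent.on_messages([TextMessage(content="10", source="user")])  # -> "11"
second = agent.on_messages([])  # empty call: answers from stored history -> "12"
```

Without the internal history, the second call would have nothing to operate on, which is why the note stresses keeping previous messages.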
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can create a {py:class}`~autogen_agentchat.teams.SelectorGroupChat` with 5 instances of `ArithmeticAgent`:\n",
"\n",
"- one that adds 1 to the input integer,\n",
"- one that subtracts 1 from the input integer,\n",
"- one that multiplies the input integer by 2,\n",
"- one that divides the input integer by 2 and rounds down to the nearest integer, and\n",
"- one that returns the input integer unchanged.\n",
"\n",
"We then create a {py:class}`~autogen_agentchat.teams.SelectorGroupChat` with these agents,\n",
"and set the appropriate selector settings:\n",
"\n",
"- allow the same agent to be selected consecutively to allow for repeated operations, and\n",
"- customize the selector prompt to tailor the model's response to the specific task."
]
},
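The selector settings above rely on a prompt template with `{roles}`, `{participants}`, and `{history}` placeholders that the team fills in before each selection. The exact formatting the team uses internally is not shown here; the sketch below just illustrates the template expansion with hypothetical values:

```python
# The same template passed as selector_prompt in the cell below.
selector_prompt = (
    "Available roles:\n{roles}\nTheir job descriptions:\n{participants}\n"
    "Current conversation history:\n{history}\n"
    "Please select the most appropriate role for the next message, and only return the role name."
)

# Hypothetical fill-in values; the real ones are generated by the team.
roles = "add_agent\nmultiply_agent"
participants = "add_agent: Adds 1 to the number.\nmultiply_agent: Multiplies the number by 2."
history = "user: 10"

prompt = selector_prompt.format(roles=roles, participants=participants, history=history)
print(prompt)
```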
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"Apply the operations to turn the given number into 25.\n",
"---------- user ----------\n",
"10\n",
"---------- multiply_agent ----------\n",
"20\n",
"---------- add_agent ----------\n",
"21\n",
"---------- multiply_agent ----------\n",
"42\n",
"---------- divide_agent ----------\n",
"21\n",
"---------- add_agent ----------\n",
"22\n",
"---------- add_agent ----------\n",
"23\n",
"---------- add_agent ----------\n",
"24\n",
"---------- add_agent ----------\n",
"25\n",
"---------- Summary ----------\n",
"Number of messages: 10\n",
"Finish reason: Maximum number of messages 10 reached, current message count: 10\n",
"Total prompt tokens: 0\n",
"Total completion tokens: 0\n",
"Duration: 2.40 seconds\n"
]
}
],
"source": [
"async def run_number_agents() -> None:\n",
" # Create agents for number operations.\n",
" add_agent = ArithmeticAgent(\"add_agent\", \"Adds 1 to the number.\", lambda x: x + 1)\n",
" multiply_agent = ArithmeticAgent(\"multiply_agent\", \"Multiplies the number by 2.\", lambda x: x * 2)\n",
" subtract_agent = ArithmeticAgent(\"subtract_agent\", \"Subtracts 1 from the number.\", lambda x: x - 1)\n",
" divide_agent = ArithmeticAgent(\"divide_agent\", \"Divides the number by 2 and rounds down.\", lambda x: x // 2)\n",
" identity_agent = ArithmeticAgent(\"identity_agent\", \"Returns the number as is.\", lambda x: x)\n",
"\n",
" # The termination condition is to stop after 10 messages.\n",
" termination_condition = MaxMessageTermination(10)\n",
"\n",
" # Create a selector group chat.\n",
" selector_group_chat = SelectorGroupChat(\n",
" [add_agent, multiply_agent, subtract_agent, divide_agent, identity_agent],\n",
" model_client=OpenAIChatCompletionClient(model=\"gpt-4o\"),\n",
" termination_condition=termination_condition,\n",
" allow_repeated_speaker=True, # Allow the same agent to speak multiple times, necessary for this task.\n",
" selector_prompt=(\n",
" \"Available roles:\\n{roles}\\nTheir job descriptions:\\n{participants}\\n\"\n",
" \"Current conversation history:\\n{history}\\n\"\n",
" \"Please select the most appropriate role for the next message, and only return the role name.\"\n",
" ),\n",
" )\n",
"\n",
" # Run the selector group chat with a given task and stream the response.\n",
" task: List[ChatMessage] = [\n",
" TextMessage(content=\"Apply the operations to turn the given number into 25.\", source=\"user\"),\n",
" TextMessage(content=\"10\", source=\"user\"),\n",
" ]\n",
" stream = selector_group_chat.run_stream(task=task)\n",
" await Console(stream)\n",
"\n",
"\n",
"# Use asyncio.run(run_number_agents()) when running in a script.\n",
"await run_number_agents()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"From the output, we can see that the agents have successfully transformed the input integer\n",
"from 10 to 25 by choosing appropriate agents that apply the arithmetic operations in sequence."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

@@ -23,7 +23,7 @@
"At a high level, messages in AgentChat can be categorized into two types: agent-agent messages and an agent's internal events and messages.\n",
"\n",
"### Agent-Agent Messages\n",
"AgentChat supports many message types for agent-to-agent communication. The most common one is the {py:class}`~autogen_agentchat.messages.ChatMessage`. This message type allows both text and multimodal communication and subsumes other message types, such as {py:class}`~autogen_agentchat.messages.TextMessage` or {py:class}`~autogen_agentchat.messages.MultiModalMessage`.\n",
"AgentChat supports many message types for agent-to-agent communication. They belong to the union type {py:class}`~autogen_agentchat.messages.ChatMessage`. This message type allows both text and multimodal communication and subsumes other message types, such as {py:class}`~autogen_agentchat.messages.TextMessage` or {py:class}`~autogen_agentchat.messages.MultiModalMessage`.\n",
"\n",
"For example, the following code snippet demonstrates how to create a text message, which accepts a string content and a string source:"
]
@@ -91,13 +91,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Internal Events and Messages\n",
"### Internal Events\n",
"\n",
"AgentChat also supports the concept of `inner_messages` - messages that are internal to an agent. These messages are used to communicate events and information on actions _within_ the agent itself.\n",
"AgentChat also supports the concept of `events` - messages that are internal to an agent. These messages are used to communicate events and information on actions _within_ the agent itself, and belong to the union type {py:class}`~autogen_agentchat.messages.AgentEvent`.\n",
"\n",
"Examples of these include {py:class}`~autogen_agentchat.messages.ToolCallMessage`, which indicates that a request was made to call a tool, and {py:class}`~autogen_agentchat.messages.ToolCallResultMessage`, which contains the results of tool calls.\n",
"Examples of these include {py:class}`~autogen_agentchat.messages.ToolCallRequestEvent`, which indicates that a request was made to call a tool, and {py:class}`~autogen_agentchat.messages.ToolCallExecutionEvent`, which contains the results of tool calls.\n",
"\n",
"Typically, these messages are created by the agent itself and are contained in the {py:attr}`~autogen_agentchat.base.Response.inner_messages` field of the {py:class}`~autogen_agentchat.base.Response` returned from {py:class}`~autogen_agentchat.base.ChatAgent.on_messages`. If you are building a custom agent and have events that you want to communicate to other entities (e.g., a UI), you can include these in the {py:attr}`~autogen_agentchat.base.Response.inner_messages` field of the {py:class}`~autogen_agentchat.base.Response`. We will show examples of this in [Custom Agents](./custom-agents.ipynb).\n",
"Typically, events are created by the agent itself and are contained in the {py:attr}`~autogen_agentchat.base.Response.inner_messages` field of the {py:class}`~autogen_agentchat.base.Response` returned from {py:class}`~autogen_agentchat.base.ChatAgent.on_messages`. If you are building a custom agent and have events that you want to communicate to other entities (e.g., a UI), you can include these in the {py:attr}`~autogen_agentchat.base.Response.inner_messages` field of the {py:class}`~autogen_agentchat.base.Response`. We will show examples of this in [Custom Agents](./custom-agents.ipynb).\n",
"\n",
"\n",
"You can read about the full set of messages supported in AgentChat in the {py:mod}`~autogen_agentchat.messages` module. \n",
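The two union categories, `ChatMessage` for agent-to-agent communication and `AgentEvent` for internal events, are typically told apart with `isinstance` checks when consuming a stream. A self-contained sketch with stand-in classes (not the real unions):

```python
from dataclasses import dataclass
from typing import Union


# Stand-ins for one variant of each union described above.
@dataclass
class TextMessage:  # a ChatMessage variant
    content: str


@dataclass
class ToolCallRequestEvent:  # an AgentEvent variant
    tool: str


StreamItem = Union[TextMessage, ToolCallRequestEvent]


def describe(item: StreamItem) -> str:
    # Events report internal activity; chat messages carry the conversation.
    if isinstance(item, ToolCallRequestEvent):
        return f"event: calling {item.tool}"
    return f"chat: {item.content}"


print(describe(ToolCallRequestEvent(tool="web_search")))  # prints: event: calling web_search
print(describe(TextMessage(content="hi")))  # prints: chat: hi
```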

File diff suppressed because one or more lines are too long

@@ -1,304 +1,304 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Termination \n",
"\n",
"In the previous section, we explored how to define agents, and organize them into teams that can solve tasks. However, a run can go on forever, and in many cases, we need to know _when_ to stop them. This is the role of the termination condition.\n",
"\n",
"AgentChat supports several termination conditions by providing a base {py:class}`~autogen_agentchat.base.TerminationCondition` class and several implementations that inherit from it.\n",
"\n",
"A termination condition is a callable that takes a sequence of {py:class}`~autogen_agentchat.messages.AgentMessage` objects **since the last time the condition was called**, and returns a {py:class}`~autogen_agentchat.messages.StopMessage` if the conversation should be terminated, or `None` otherwise.\n",
"Once a termination condition has been reached, it must be reset by calling {py:meth}`~autogen_agentchat.base.TerminationCondition.reset` before it can be used again.\n",
"\n",
"Some important things to note about termination conditions: \n",
"- They are stateful but reset automatically after each run ({py:meth}`~autogen_agentchat.base.TaskRunner.run` or {py:meth}`~autogen_agentchat.base.TaskRunner.run_stream`) is finished.\n",
"- They can be combined using the AND and OR operators.\n",
"\n",
"```{note}\n",
"For group chat teams (i.e., {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat`,\n",
"{py:class}`~autogen_agentchat.teams.SelectorGroupChat`, and {py:class}`~autogen_agentchat.teams.Swarm`),\n",
"the termination condition is called after each agent responds.\n",
"While a response may contain multiple inner messages, the team calls its termination condition just once for all the messages from a single response.\n",
"So the condition is called with the \"delta sequence\" of messages since the last time it was called.\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Built-In Termination Conditions: \n",
"1. {py:class}`~autogen_agentchat.conditions.MaxMessageTermination`: Stops after a specified number of messages have been produced, including both agent and task messages.\n",
"2. {py:class}`~autogen_agentchat.conditions.TextMentionTermination`: Stops when specific text or string is mentioned in a message (e.g., \"TERMINATE\").\n",
"3. {py:class}`~autogen_agentchat.conditions.TokenUsageTermination`: Stops when a certain number of prompt or completion tokens are used. This requires the agents to report token usage in their messages.\n",
"4. {py:class}`~autogen_agentchat.conditions.TimeoutTermination`: Stops after a specified duration in seconds.\n",
"5. {py:class}`~autogen_agentchat.conditions.HandoffTermination`: Stops when a handoff to a specific target is requested. Handoff messages can be used to build patterns such as {py:class}`~autogen_agentchat.teams.Swarm`. This is useful when you want to pause the run and allow application or user to provide input when an agent hands off to them.\n",
"6. {py:class}`~autogen_agentchat.conditions.SourceMatchTermination`: Stops after a specific agent responds.\n",
"7. {py:class}`~autogen_agentchat.conditions.ExternalTermination`: Enables programmatic control of termination from outside the run. This is useful for UI integration (e.g., \"Stop\" buttons in chat interfaces).\n",
"8. {py:class}`~autogen_agentchat.conditions.StopMessageTermination`: Stops when a {py:class}`~autogen_agentchat.messages.StopMessage` is produced by an agent."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To demonstrate the characteristics of termination conditions, we'll create a team consisting of two agents: a primary agent responsible for text generation and a critic agent that reviews and provides feedback on the generated text."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o\",\n",
" temperature=1,\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
")\n",
"\n",
"# Create the primary agent.\n",
"primary_agent = AssistantAgent(\n",
" \"primary\",\n",
" model_client=model_client,\n",
" system_message=\"You are a helpful AI assistant.\",\n",
")\n",
"\n",
"# Create the critic agent.\n",
"critic_agent = AssistantAgent(\n",
" \"critic\",\n",
" model_client=model_client,\n",
" system_message=\"Provide constructive feedback for every message. Respond with 'APPROVE' to when your feedbacks are addressed.\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's explore how termination conditions automatically reset after each `run` or `run_stream` call, allowing the team to resume its conversation from where it left off."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"Write a unique, Haiku about the weather in Paris\n",
"---------- primary ----------\n",
"Gentle rain whispers, \n",
"Cobblestones glisten softly— \n",
"Paris dreams in gray.\n",
"[Prompt tokens: 30, Completion tokens: 19]\n",
"---------- critic ----------\n",
"The Haiku captures the essence of a rainy day in Paris beautifully, and the imagery is vivid. However, it's important to ensure the use of the traditional 5-7-5 syllable structure for Haikus. Your current Haiku lines are composed of 4-7-5 syllables, which slightly deviates from the form. Consider revising the first line to fit the structure.\n",
"\n",
"For example:\n",
"Soft rain whispers down, \n",
"Cobblestones glisten softly — \n",
"Paris dreams in gray.\n",
"\n",
"This revision maintains the essence of your original lines while adhering to the traditional Haiku structure.\n",
"[Prompt tokens: 70, Completion tokens: 120]\n",
"---------- Summary ----------\n",
"Number of messages: 3\n",
"Finish reason: Maximum number of messages 3 reached, current message count: 3\n",
"Total prompt tokens: 100\n",
"Total completion tokens: 139\n",
"Duration: 3.34 seconds\n"
]
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Termination \n",
"\n",
"In the previous section, we explored how to define agents, and organize them into teams that can solve tasks. However, a run can go on forever, and in many cases, we need to know _when_ to stop them. This is the role of the termination condition.\n",
"\n",
"AgentChat supports several termination condition by providing a base {py:class}`~autogen_agentchat.base.TerminationCondition` class and several implementations that inherit from it.\n",
"\n",
"A termination condition is a callable that takes a sequece of {py:class}`~autogen_agentchat.messages.AgentEvent` or {py:class}`~autogen_agentchat.messages.ChatMessage` objects **since the last time the condition was called**, and returns a {py:class}`~autogen_agentchat.messages.StopMessage` if the conversation should be terminated, or `None` otherwise.\n",
"Once a termination condition has been reached, it must be reset by calling {py:meth}`~autogen_agentchat.base.TerminationCondition.reset` before it can be used again.\n",
"\n",
"Some important things to note about termination conditions: \n",
"- They are stateful but reset automatically after each run ({py:meth}`~autogen_agentchat.base.TaskRunner.run` or {py:meth}`~autogen_agentchat.base.TaskRunner.run_stream`) is finished.\n",
"- They can be combined using the AND and OR operators.\n",
"\n",
"```{note}\n",
"For group chat teams (i.e., {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat`,\n",
"{py:class}`~autogen_agentchat.teams.SelectorGroupChat`, and {py:class}`~autogen_agentchat.teams.Swarm`),\n",
"the termination condition is called after each agent responds.\n",
"While a response may contain multiple inner messages, the team calls its termination condition just once for all the messages from a single response.\n",
"So the condition is called with the \"delta sequence\" of messages since the last time it was called.\n",
"```"
]
},
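The callable-and-reset protocol described above can be illustrated with a toy stand-in. This is plain Python for illustration only, not AgentChat's real `TerminationCondition` base class; the `SimpleStopMessage` class and the exact message shape are assumptions made for the sketch:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SimpleStopMessage:
    # Stand-in for autogen_agentchat.messages.StopMessage.
    content: str
    source: str


class MaxMessageCondition:
    """Toy termination condition: stop once `max_messages` have been seen.

    It mirrors the protocol sketched above: it is called with the *delta*
    sequence of messages since the previous call, keeps state across calls,
    and must be reset before reuse once it has fired.
    """

    def __init__(self, max_messages: int) -> None:
        self._max_messages = max_messages
        self._count = 0
        self._terminated = False

    def __call__(self, delta: List[object]) -> Optional[SimpleStopMessage]:
        if self._terminated:
            raise RuntimeError("Condition already fired; call reset() first.")
        self._count += len(delta)  # only the new messages arrive each call
        if self._count >= self._max_messages:
            self._terminated = True
            return SimpleStopMessage(
                content=f"Max messages {self._max_messages} reached.",
                source="MaxMessageCondition",
            )
        return None

    def reset(self) -> None:
        self._count = 0
        self._terminated = False


cond = MaxMessageCondition(max_messages=3)
print(cond(["m1", "m2"]) is None)  # True: only two messages seen so far
stop = cond(["m3"])                # the third message crosses the threshold
print(stop.content)
cond.reset()                       # ready for the next run
```

Note how the condition never re-reads the full history: the caller hands it only the new messages, so the condition itself is responsible for accumulating whatever state it needs.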
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Built-In Termination Conditions: \n",
"1. {py:class}`~autogen_agentchat.conditions.MaxMessageTermination`: Stops after a specified number of messages have been produced, including both agent and task messages.\n",
"2. {py:class}`~autogen_agentchat.conditions.TextMentionTermination`: Stops when specific text or string is mentioned in a message (e.g., \"TERMINATE\").\n",
"3. {py:class}`~autogen_agentchat.conditions.TokenUsageTermination`: Stops when a certain number of prompt or completion tokens are used. This requires the agents to report token usage in their messages.\n",
"4. {py:class}`~autogen_agentchat.conditions.TimeoutTermination`: Stops after a specified duration in seconds.\n",
"5. {py:class}`~autogen_agentchat.conditions.HandoffTermination`: Stops when a handoff to a specific target is requested. Handoff messages can be used to build patterns such as {py:class}`~autogen_agentchat.teams.Swarm`. This is useful when you want to pause the run and allow application or user to provide input when an agent hands off to them.\n",
"6. {py:class}`~autogen_agentchat.conditions.SourceMatchTermination`: Stops after a specific agent responds.\n",
"7. {py:class}`~autogen_agentchat.conditions.ExternalTermination`: Enables programmatic control of termination from outside the run. This is useful for UI integration (e.g., \"Stop\" buttons in chat interfaces).\n",
"8. {py:class}`~autogen_agentchat.conditions.StopMessageTermination`: Stops when a {py:class}`~autogen_agentchat.messages.StopMessage` is produced by an agent."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To demonstrate the characteristics of termination conditions, we'll create a team consisting of two agents: a primary agent responsible for text generation and a critic agent that reviews and provides feedback on the generated text."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
"model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o\",\n",
" temperature=1,\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY env variable set.\n",
")\n",
"\n",
"# Create the primary agent.\n",
"primary_agent = AssistantAgent(\n",
" \"primary\",\n",
" model_client=model_client,\n",
" system_message=\"You are a helpful AI assistant.\",\n",
")\n",
"\n",
"# Create the critic agent.\n",
"critic_agent = AssistantAgent(\n",
" \"critic\",\n",
" model_client=model_client,\n",
" system_message=\"Provide constructive feedback for every message. Respond with 'APPROVE' to when your feedbacks are addressed.\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's explore how termination conditions automatically reset after each `run` or `run_stream` call, allowing the team to resume its conversation from where it left off."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"Write a unique, Haiku about the weather in Paris\n",
"---------- primary ----------\n",
"Gentle rain whispers, \n",
"Cobblestones glisten softly— \n",
"Paris dreams in gray.\n",
"[Prompt tokens: 30, Completion tokens: 19]\n",
"---------- critic ----------\n",
"The Haiku captures the essence of a rainy day in Paris beautifully, and the imagery is vivid. However, it's important to ensure the use of the traditional 5-7-5 syllable structure for Haikus. Your current Haiku lines are composed of 4-7-5 syllables, which slightly deviates from the form. Consider revising the first line to fit the structure.\n",
"\n",
"For example:\n",
"Soft rain whispers down, \n",
"Cobblestones glisten softly — \n",
"Paris dreams in gray.\n",
"\n",
"This revision maintains the essence of your original lines while adhering to the traditional Haiku structure.\n",
"[Prompt tokens: 70, Completion tokens: 120]\n",
"---------- Summary ----------\n",
"Number of messages: 3\n",
"Finish reason: Maximum number of messages 3 reached, current message count: 3\n",
"Total prompt tokens: 100\n",
"Total completion tokens: 139\n",
"Duration: 3.34 seconds\n"
]
},
{
"data": {
"text/plain": [
"TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Write a unique, Haiku about the weather in Paris'), TextMessage(source='primary', models_usage=RequestUsage(prompt_tokens=30, completion_tokens=19), content='Gentle rain whispers, \\nCobblestones glisten softly— \\nParis dreams in gray.'), TextMessage(source='critic', models_usage=RequestUsage(prompt_tokens=70, completion_tokens=120), content=\"The Haiku captures the essence of a rainy day in Paris beautifully, and the imagery is vivid. However, it's important to ensure the use of the traditional 5-7-5 syllable structure for Haikus. Your current Haiku lines are composed of 4-7-5 syllables, which slightly deviates from the form. Consider revising the first line to fit the structure.\\n\\nFor example:\\nSoft rain whispers down, \\nCobblestones glisten softly — \\nParis dreams in gray.\\n\\nThis revision maintains the essence of your original lines while adhering to the traditional Haiku structure.\")], stop_reason='Maximum number of messages 3 reached, current message count: 3')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"max_msg_termination = MaxMessageTermination(max_messages=3)\n",
"round_robin_team = RoundRobinGroupChat([primary_agent, critic_agent], termination_condition=max_msg_termination)\n",
"\n",
"# Use asyncio.run(...) if you are running this script as a standalone script.\n",
"await Console(round_robin_team.run_stream(task=\"Write a unique, Haiku about the weather in Paris\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The conversation stopped after reaching the maximum message limit. Since the primary agent didn't get to respond to the feedback, let's continue the conversation."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- primary ----------\n",
"Thank you for your feedback. Here is the revised Haiku:\n",
"\n",
"Soft rain whispers down, \n",
"Cobblestones glisten softly — \n",
"Paris dreams in gray.\n",
"[Prompt tokens: 181, Completion tokens: 32]\n",
"---------- critic ----------\n",
"The revised Haiku now follows the traditional 5-7-5 syllable pattern, and it still beautifully captures the atmospheric mood of Paris in the rain. The imagery and flow are both clear and evocative. Well done on making the adjustment! \n",
"\n",
"APPROVE\n",
"[Prompt tokens: 234, Completion tokens: 54]\n",
"---------- primary ----------\n",
"Thank you for your kind words and approval. I'm glad the revision meets your expectations and captures the essence of Paris. If you have any more requests or need further assistance, feel free to ask!\n",
"[Prompt tokens: 279, Completion tokens: 39]\n",
"---------- Summary ----------\n",
"Number of messages: 3\n",
"Finish reason: Maximum number of messages 3 reached, current message count: 3\n",
"Total prompt tokens: 694\n",
"Total completion tokens: 125\n",
"Duration: 6.43 seconds\n"
]
},
{
"data": {
"text/plain": [
"TaskResult(messages=[TextMessage(source='primary', models_usage=RequestUsage(prompt_tokens=181, completion_tokens=32), content='Thank you for your feedback. Here is the revised Haiku:\\n\\nSoft rain whispers down, \\nCobblestones glisten softly — \\nParis dreams in gray.'), TextMessage(source='critic', models_usage=RequestUsage(prompt_tokens=234, completion_tokens=54), content='The revised Haiku now follows the traditional 5-7-5 syllable pattern, and it still beautifully captures the atmospheric mood of Paris in the rain. The imagery and flow are both clear and evocative. Well done on making the adjustment! \\n\\nAPPROVE'), TextMessage(source='primary', models_usage=RequestUsage(prompt_tokens=279, completion_tokens=39), content=\"Thank you for your kind words and approval. I'm glad the revision meets your expectations and captures the essence of Paris. If you have any more requests or need further assistance, feel free to ask!\")], stop_reason='Maximum number of messages 3 reached, current message count: 3')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Use asyncio.run(...) if you are running this script as a standalone script.\n",
"await Console(round_robin_team.run_stream())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The team continued from where it left off, allowing the primary agent to respond to the feedback."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's show how termination conditions can be combined using the AND (`&`) and OR (`|`) operators to create more complex termination logic. For example, we'll create a team that stops either after 10 messages are generated or when the critic agent approves a message.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"Write a unique, Haiku about the weather in Paris\n",
"---------- primary ----------\n",
"Spring breeze gently hums, \n",
"Cherry blossoms in full bloom— \n",
"Paris wakes to life.\n",
"[Prompt tokens: 467, Completion tokens: 19]\n",
"---------- critic ----------\n",
"The Haiku beautifully captures the awakening of Paris in the spring. The imagery of a gentle spring breeze and cherry blossoms in full bloom effectively conveys the rejuvenating feel of the season. The final line, \"Paris wakes to life,\" encapsulates the renewed energy and vibrancy of the city. The Haiku adheres to the 5-7-5 syllable structure and portrays a vivid seasonal transformation in a concise and poetic manner. Excellent work!\n",
"\n",
"APPROVE\n",
"[Prompt tokens: 746, Completion tokens: 93]\n",
"---------- Summary ----------\n",
"Number of messages: 3\n",
"Finish reason: Text 'APPROVE' mentioned\n",
"Total prompt tokens: 1213\n",
"Total completion tokens: 112\n",
"Duration: 2.75 seconds\n"
]
},
{
"data": {
"text/plain": [
"TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Write a unique, Haiku about the weather in Paris'), TextMessage(source='primary', models_usage=RequestUsage(prompt_tokens=467, completion_tokens=19), content='Spring breeze gently hums, \\nCherry blossoms in full bloom— \\nParis wakes to life.'), TextMessage(source='critic', models_usage=RequestUsage(prompt_tokens=746, completion_tokens=93), content='The Haiku beautifully captures the awakening of Paris in the spring. The imagery of a gentle spring breeze and cherry blossoms in full bloom effectively conveys the rejuvenating feel of the season. The final line, \"Paris wakes to life,\" encapsulates the renewed energy and vibrancy of the city. The Haiku adheres to the 5-7-5 syllable structure and portrays a vivid seasonal transformation in a concise and poetic manner. Excellent work!\\n\\nAPPROVE')], stop_reason=\"Text 'APPROVE' mentioned\")"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"max_msg_termination = MaxMessageTermination(max_messages=10)\n",
"text_termination = TextMentionTermination(\"APPROVE\")\n",
"combined_termination = max_msg_termination | text_termination\n",
"\n",
"round_robin_team = RoundRobinGroupChat([primary_agent, critic_agent], termination_condition=combined_termination)\n",
"\n",
"# Use asyncio.run(...) if you are running this script as a standalone script.\n",
"await Console(round_robin_team.run_stream(task=\"Write a unique, Haiku about the weather in Paris\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The conversation stopped after the critic agent approved the message, although it could have also stopped if 10 messages were generated.\n",
"\n",
"Alternatively, if we want to stop the run only when both conditions are met, we can use the AND (`&`) operator."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"combined_termination = max_msg_termination & text_termination"
]
}
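The `|` and `&` semantics can be sketched in plain Python. These are illustrative stand-ins, not AgentChat's implementation: OR fires as soon as either condition returns a stop result, while AND fires only once *both* conditions have fired, remembering which ones already did:

```python
class CountCondition:
    # Toy condition: fires after `n` messages have been seen in total.
    def __init__(self, n: int) -> None:
        self.n = n
        self.seen = 0

    def __call__(self, delta):
        self.seen += len(delta)
        return f"count>={self.n}" if self.seen >= self.n else None


class OrCondition:
    # Fires when ANY child fires (mirrors `cond_a | cond_b`).
    def __init__(self, *conds) -> None:
        self.conds = conds

    def __call__(self, delta):
        fired = [r for r in (c(delta) for c in self.conds) if r is not None]
        return "; ".join(fired) if fired else None


class AndCondition:
    # Fires only when ALL children have fired at some point
    # (mirrors `cond_a & cond_b`): it remembers earlier firings.
    def __init__(self, *conds) -> None:
        self.conds = conds
        self.fired = [None] * len(conds)

    def __call__(self, delta):
        for i, c in enumerate(self.conds):
            if self.fired[i] is None:
                self.fired[i] = c(delta)
        if all(r is not None for r in self.fired):
            return "; ".join(self.fired)
        return None


either = OrCondition(CountCondition(2), CountCondition(5))
print(either(["m1"]))      # None: neither child has fired yet
print(either(["m2"]))      # first child fires at two messages

both = AndCondition(CountCondition(2), CountCondition(3))
print(both(["m1", "m2"]))  # None: only the first child has fired
print(both(["m3"]))        # both have now fired, so AND stops the run
```

The key design point is that AND must be stateful across calls: one child may fire on an earlier delta than the other, so its result has to be remembered until the remaining children catch up.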
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat": 4,
"nbformat_minor": 2
}