Remove autogen_agentchat.tasks, create autogen_agentchat.ui and autogen_agentchat.conditions (#4512)

Eric Zhu
2024-12-03 15:24:25 -08:00
committed by GitHub
parent b62f8f63dc
commit 32aa452af8
31 changed files with 661 additions and 558 deletions
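Every diff below applies the same mechanical rename: termination conditions move from `autogen_agentchat.task` to `autogen_agentchat.conditions`, and `Console` moves to `autogen_agentchat.ui`. The mapping can be sketched as a small migration helper (a hypothetical script inferred from the diffs in this commit; it is not shipped by the library):

```python
# Hypothetical old-module -> new-module mapping, inferred from the diffs below.
NEW_MODULE = {
    "TextMentionTermination": "autogen_agentchat.conditions",
    "MaxMessageTermination": "autogen_agentchat.conditions",
    "HandoffTermination": "autogen_agentchat.conditions",
    "TokenUsageTermination": "autogen_agentchat.conditions",
    "Console": "autogen_agentchat.ui",
}


def rewrite_import(line: str) -> str:
    """Rewrite `from autogen_agentchat.task import X, Y` to the new modules.

    Multi-name imports are split into one line per name, since the names may
    now live in different modules (e.g. Console vs. MaxMessageTermination).
    Unrecognized names are left under the old module; other lines pass through.
    """
    prefix = "from autogen_agentchat.task import "
    if not line.startswith(prefix):
        return line  # not an old-style import; leave untouched
    names = [n.strip() for n in line[len(prefix):].split(",")]
    return "\n".join(
        f"from {NEW_MODULE.get(name, 'autogen_agentchat.task')} import {name}"
        for name in names
    )
```

For example, `rewrite_import("from autogen_agentchat.task import Console, TextMentionTermination")` splits the line into the two new single-name imports seen throughout this commit.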

View File

@@ -23,7 +23,7 @@
"outputs": [],
"source": [
"from autogen_agentchat.agents import CodingAssistantAgent, ToolUseAssistantAgent\n",
"from autogen_agentchat.task import TextMentionTermination\n",
"from autogen_agentchat.conditions import TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_core.components.tools import FunctionTool\n",
"from autogen_ext.models import OpenAIChatCompletionClient"

View File

@@ -23,7 +23,7 @@
"outputs": [],
"source": [
"from autogen_agentchat.agents import CodingAssistantAgent, ToolUseAssistantAgent\n",
"from autogen_agentchat.task import TextMentionTermination\n",
"from autogen_agentchat.conditions import TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_core.components.tools import FunctionTool\n",
"from autogen_ext.models import OpenAIChatCompletionClient"

View File

@@ -18,7 +18,7 @@
"outputs": [],
"source": [
"from autogen_agentchat.agents import CodingAssistantAgent\n",
"from autogen_agentchat.task import TextMentionTermination\n",
"from autogen_agentchat.conditions import TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_ext.models import OpenAIChatCompletionClient"
]

View File

@@ -1,160 +1,161 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Quickstart"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{include} warning.md\n",
"\n",
"```\n",
"\n",
":::{note}\n",
"For installation instructions, please refer to the [installation guide](./installation).\n",
":::\n",
"\n",
"In AutoGen AgentChat, you can build applications quickly using preset agents.\n",
"To illustrate this, we will begin with creating a team of a single agent\n",
"that can use tools and respond to messages.\n",
"\n",
"The following code uses the OpenAI model. If you haven't already, you need to\n",
"install the following package and extension:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-agentchat==0.4.0.dev8' 'autogen-ext[openai]==0.4.0.dev8'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Azure OpenAI models and AAD authentication,\n",
"you can follow the instructions [here](./tutorial/models.ipynb#azure-openai)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"What is the weather in New York?\n",
"---------- weather_agent ----------\n",
"[FunctionCall(id='call_AhTZ2q3TNL8x0qs00e3wIZ7y', arguments='{\"city\":\"New York\"}', name='get_weather')]\n",
"[Prompt tokens: 79, Completion tokens: 15]\n",
"---------- weather_agent ----------\n",
"[FunctionExecutionResult(content='The weather in New York is 73 degrees and Sunny.', call_id='call_AhTZ2q3TNL8x0qs00e3wIZ7y')]\n",
"---------- weather_agent ----------\n",
"The weather in New York is currently 73 degrees and sunny.\n",
"[Prompt tokens: 90, Completion tokens: 14]\n",
"---------- weather_agent ----------\n",
"TERMINATE\n",
"[Prompt tokens: 137, Completion tokens: 4]\n",
"---------- Summary ----------\n",
"Number of messages: 5\n",
"Finish reason: Text 'TERMINATE' mentioned\n",
"Total prompt tokens: 306\n",
"Total completion tokens: 33\n",
"Duration: 1.43 seconds\n"
]
}
],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.task import Console, TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"\n",
"# Define a tool\n",
"async def get_weather(city: str) -> str:\n",
" return f\"The weather in {city} is 73 degrees and Sunny.\"\n",
"\n",
"\n",
"async def main() -> None:\n",
" # Define an agent\n",
" weather_agent = AssistantAgent(\n",
" name=\"weather_agent\",\n",
" model_client=OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"YOUR_API_KEY\",\n",
" ),\n",
" tools=[get_weather],\n",
" )\n",
"\n",
" # Define termination condition\n",
" termination = TextMentionTermination(\"TERMINATE\")\n",
"\n",
" # Define a team\n",
" agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)\n",
"\n",
" # Run the team and stream messages to the console\n",
" stream = agent_team.run_stream(task=\"What is the weather in New York?\")\n",
" await Console(stream)\n",
"\n",
"\n",
"# NOTE: if running this inside a Python script you'll need to use asyncio.run(main()).\n",
"await main()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code snippet above introduces two high level concepts in AgentChat: *Agent* and *Team*. An Agent helps us define what actions are taken when a message is received. Specifically, we use the {py:class}`~autogen_agentchat.agents.AssistantAgent` preset - an agent that can be given access to a model (e.g., LLM) and tools (functions) that it can then use to address tasks. A Team helps us define the rules for how agents interact with each other. In the {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat` team, agents respond in a sequential round-robin fashion.\n",
"In this case, we have a single agent, so the same agent is used for each round."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What's Next?\n",
"\n",
"Now that you have a basic understanding of how to define an agent and a team, consider following the [tutorial](./tutorial/index) for a walkthrough on other features of AgentChat.\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Quickstart"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{include} warning.md\n",
"\n",
"```\n",
"\n",
":::{note}\n",
"For installation instructions, please refer to the [installation guide](./installation).\n",
":::\n",
"\n",
"In AutoGen AgentChat, you can build applications quickly using preset agents.\n",
"To illustrate this, we will begin with creating a team of a single agent\n",
"that can use tools and respond to messages.\n",
"\n",
"The following code uses the OpenAI model. If you haven't already, you need to\n",
"install the following package and extension:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-agentchat==0.4.0.dev8' 'autogen-ext[openai]==0.4.0.dev8'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use Azure OpenAI models and AAD authentication,\n",
"you can follow the instructions [here](./tutorial/models.ipynb#azure-openai)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"What is the weather in New York?\n",
"---------- weather_agent ----------\n",
"[FunctionCall(id='call_AhTZ2q3TNL8x0qs00e3wIZ7y', arguments='{\"city\":\"New York\"}', name='get_weather')]\n",
"[Prompt tokens: 79, Completion tokens: 15]\n",
"---------- weather_agent ----------\n",
"[FunctionExecutionResult(content='The weather in New York is 73 degrees and Sunny.', call_id='call_AhTZ2q3TNL8x0qs00e3wIZ7y')]\n",
"---------- weather_agent ----------\n",
"The weather in New York is currently 73 degrees and sunny.\n",
"[Prompt tokens: 90, Completion tokens: 14]\n",
"---------- weather_agent ----------\n",
"TERMINATE\n",
"[Prompt tokens: 137, Completion tokens: 4]\n",
"---------- Summary ----------\n",
"Number of messages: 5\n",
"Finish reason: Text 'TERMINATE' mentioned\n",
"Total prompt tokens: 306\n",
"Total completion tokens: 33\n",
"Duration: 1.43 seconds\n"
]
}
],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.conditions import TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"\n",
"# Define a tool\n",
"async def get_weather(city: str) -> str:\n",
" return f\"The weather in {city} is 73 degrees and Sunny.\"\n",
"\n",
"\n",
"async def main() -> None:\n",
" # Define an agent\n",
" weather_agent = AssistantAgent(\n",
" name=\"weather_agent\",\n",
" model_client=OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"YOUR_API_KEY\",\n",
" ),\n",
" tools=[get_weather],\n",
" )\n",
"\n",
" # Define termination condition\n",
" termination = TextMentionTermination(\"TERMINATE\")\n",
"\n",
" # Define a team\n",
" agent_team = RoundRobinGroupChat([weather_agent], termination_condition=termination)\n",
"\n",
" # Run the team and stream messages to the console\n",
" stream = agent_team.run_stream(task=\"What is the weather in New York?\")\n",
" await Console(stream)\n",
"\n",
"\n",
"# NOTE: if running this inside a Python script you'll need to use asyncio.run(main()).\n",
"await main()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code snippet above introduces two high level concepts in AgentChat: *Agent* and *Team*. An Agent helps us define what actions are taken when a message is received. Specifically, we use the {py:class}`~autogen_agentchat.agents.AssistantAgent` preset - an agent that can be given access to a model (e.g., LLM) and tools (functions) that it can then use to address tasks. A Team helps us define the rules for how agents interact with each other. In the {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat` team, agents respond in a sequential round-robin fashion.\n",
"In this case, we have a single agent, so the same agent is used for each round."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## What's Next?\n",
"\n",
"Now that you have a basic understanding of how to define an agent and a team, consider following the [tutorial](./tutorial/index) for a walkthrough on other features of AgentChat.\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.6"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -108,13 +108,13 @@
"\n",
"We can also stream each message as it is generated by the agent by using the\n",
"{py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages_stream` method,\n",
"and use {py:class}`~autogen_agentchat.task.Console` to print the messages\n",
"and use {py:class}`~autogen_agentchat.ui.Console` to print the messages\n",
"as they appear to the console."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -138,7 +138,7 @@
}
],
"source": [
"from autogen_agentchat.task import Console\n",
"from autogen_agentchat.ui import Console\n",
"\n",
"\n",
"async def assistant_run_stream() -> None:\n",

View File

@@ -56,16 +56,17 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Sequence\n",
"\n",
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.messages import AgentMessage\n",
"from autogen_agentchat.task import Console, MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import SelectorGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.models import OpenAIChatCompletionClient"
]
},
@@ -208,8 +209,8 @@
"metadata": {},
"source": [
"Let's create the team with two termination conditions:\n",
"{py:class}`~autogen_agentchat.task.TextMentionTermination` to end the conversation when the Planning Agent sends \"TERMINATE\",\n",
"and {py:class}`~autogen_agentchat.task.MaxMessageTermination` to limit the conversation to 25 messages to avoid infinite loop."
"{py:class}`~autogen_agentchat.conditions.TextMentionTermination` to end the conversation when the Planning Agent sends \"TERMINATE\",\n",
"and {py:class}`~autogen_agentchat.conditions.MaxMessageTermination` to limit the conversation to 25 messages to avoid infinite loop."
]
},
{
@@ -458,7 +459,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
"version": "3.12.6"
}
},
"nbformat": 4,

View File

@@ -89,16 +89,17 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import Any, Dict, List\n",
"\n",
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.conditions import HandoffTermination, TextMentionTermination\n",
"from autogen_agentchat.messages import HandoffMessage\n",
"from autogen_agentchat.task import Console, HandoffTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import Swarm\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.models import OpenAIChatCompletionClient"
]
},
@@ -533,7 +534,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "autogen",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -547,7 +548,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
"version": "3.12.6"
}
},
"nbformat": 4,

View File

@@ -46,18 +46,18 @@
"On its turn, each agent broadcasts its response to all other agents in the team, so all agents have the same context.\n",
"\n",
"We will start by creating a team with a single {py:class}`~autogen_agentchat.agents.AssistantAgent` agent\n",
"and {py:class}`~autogen_agentchat.task.TextMentionTermination`\n",
"and {py:class}`~autogen_agentchat.conditions.TextMentionTermination`\n",
"termination condition that stops the team when a word is detected."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.task import TextMentionTermination\n",
"from autogen_agentchat.conditions import TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
@@ -212,7 +212,7 @@
"source": [
"As the above example shows, you can obtain the reason why the team stopped by checking the {py:attr}`~autogen_agentchat.base.TaskResult.stop_reason` attribute.\n",
"\n",
"There is a covenient method {py:meth}`~autogen_agentchat.task.Console` that prints the messages to the console\n",
"There is a covenient method {py:meth}`~autogen_agentchat.ui.Console` that prints the messages to the console\n",
"with proper formatting."
]
},
@@ -248,7 +248,7 @@
}
],
"source": [
"from autogen_agentchat.task import Console\n",
"from autogen_agentchat.ui import Console\n",
"\n",
"# Use `asyncio.run(single_agent_team.reset())` when running in a script.\n",
"await single_agent_team.reset() # Reset the team for the next run.\n",
@@ -272,20 +272,21 @@
"\n",
"In this example, we will use the {py:class}`~autogen_agentchat.agents.AssistantAgent` agent class\n",
"for both the primary and critic agents.\n",
"We will use both the {py:class}`~autogen_agentchat.task.TextMentionTermination`\n",
"and {py:class}`~autogen_agentchat.task.MaxMessageTermination` termination conditions\n",
"We will use both the {py:class}`~autogen_agentchat.conditions.TextMentionTermination`\n",
"and {py:class}`~autogen_agentchat.conditions.MaxMessageTermination` termination conditions\n",
"together to stop the team."
]
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.task import Console, MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"# Create an OpenAI model client.\n",
@@ -605,7 +606,7 @@
"to continue processing the task. We will show two possible ways to do it:\n",
"\n",
"- Set the maximum number of turns such that the team stops after the specified number of turns.\n",
"- Use the {py:class}`~autogen_agentchat.task.HandoffTermination` termination condition.\n",
"- Use the {py:class}`~autogen_agentchat.conditions.HandoffTermination` termination condition.\n",
"\n",
"You can also use custom termination conditions, see [Termination Conditions](./termination.ipynb)."
]
@@ -641,7 +642,7 @@
"source": [
"#### Using Handoff to Pause Team\n",
"\n",
"You can use the {py:class}`~autogen_agentchat.task.HandoffTermination` termination condition\n",
"You can use the {py:class}`~autogen_agentchat.conditions.HandoffTermination` termination condition\n",
"to stop the team when an agent sends a {py:class}`~autogen_agentchat.messages.HandoffMessage` message.\n",
"\n",
"Let's create a team with a single {py:class}`~autogen_agentchat.agents.AssistantAgent` agent\n",
@@ -661,7 +662,7 @@
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.base import Handoff\n",
"from autogen_agentchat.task import HandoffTermination, TextMentionTermination\n",
"from autogen_agentchat.conditions import HandoffTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
@@ -698,7 +699,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": null,
"metadata": {},
"outputs": [
{
@@ -724,7 +725,7 @@
}
],
"source": [
"from autogen_agentchat.task import Console\n",
"from autogen_agentchat.ui import Console\n",
"\n",
"# Use `asyncio.run(Console(lazy_agent_team.run_stream(task=\"What is the weather in New York?\")))` when running in a script.\n",
"await Console(lazy_agent_team.run_stream(task=\"What is the weather in New York?\"))"
@@ -791,4 +792,4 @@
},
"nbformat": 4,
"nbformat_minor": 2
}
}
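Several hunks above pair `TextMentionTermination` with `MaxMessageTermination` so a run stops on whichever condition fires first. That any-of composition can be sketched with standalone stand-ins (hypothetical re-implementations for illustration only; the real classes live in `autogen_agentchat.conditions`, and the `|` composition operator mirrored by `OrTermination` here is an assumption about the library's API):

```python
class TextMentionTermination:
    """Terminate once a given text appears in any message (sketch)."""

    def __init__(self, text: str) -> None:
        self.text = text

    def check(self, messages: list[str]) -> bool:
        return any(self.text in m for m in messages)


class MaxMessageTermination:
    """Terminate once the conversation reaches a message budget (sketch)."""

    def __init__(self, max_messages: int) -> None:
        self.max_messages = max_messages

    def check(self, messages: list[str]) -> bool:
        return len(messages) >= self.max_messages


class OrTermination:
    """Stop when any wrapped condition fires (sketch of `cond_a | cond_b`)."""

    def __init__(self, *conditions) -> None:
        self.conditions = conditions

    def check(self, messages: list[str]) -> bool:
        return any(c.check(messages) for c in self.conditions)


# Stop on "TERMINATE" or after 25 messages, whichever comes first.
termination = OrTermination(
    TextMentionTermination("TERMINATE"), MaxMessageTermination(25)
)
```

The team consults the combined condition after each message; either branch firing ends the run, which is why the notebooks can rely on the message cap as a safety net when the model never says "TERMINATE".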

View File

@@ -31,14 +31,14 @@
"metadata": {},
"source": [
"# AutoGen provides several built-in termination conditions: \n",
"1. {py:class}`~autogen_agentchat.task.MaxMessageTermination`: Stops after a specified number of messages have been produced, including both agent and task messages.\n",
"2. {py:class}`~autogen_agentchat.task.TextMentionTermination`: Stops when specific text or string is mentioned in a message (e.g., \"TERMINATE\").\n",
"3. {py:class}`~autogen_agentchat.task.TokenUsageTermination`: Stops when a certain number of prompt or completion tokens are used. This requires the agents to report token usage in their messages.\n",
"4. {py:class}`~autogen_agentchat.task.TimeoutTermination`: Stops after a specified duration in seconds.\n",
"5. {py:class}`~autogen_agentchat.task.HandoffTermination`: Stops when a handoff to a specific target is requested. Handoff messages can be used to build patterns such as {py:class}`~autogen_agentchat.teams.Swarm`. This is useful when you want to pause the run and allow application or user to provide input when an agent hands off to them.\n",
"6. {py:class}`~autogen_agentchat.task.SourceMatchTermination`: Stops after a specific agent responds.\n",
"7. {py:class}`~autogen_agentchat.task.ExternalTermination`: Enables programmatic control of termination from outside the run. This is useful for UI integration (e.g., \"Stop\" buttons in chat interfaces).\n",
"8. {py:class}`~autogen_agentchat.task.StopMessageTermination`: Stops when a {py:class}`~autogen_agentchat.messages.StopMessage` is produced by an agent."
"1. {py:class}`~autogen_agentchat.conditions.MaxMessageTermination`: Stops after a specified number of messages have been produced, including both agent and task messages.\n",
"2. {py:class}`~autogen_agentchat.conditions.TextMentionTermination`: Stops when specific text or string is mentioned in a message (e.g., \"TERMINATE\").\n",
"3. {py:class}`~autogen_agentchat.conditions.TokenUsageTermination`: Stops when a certain number of prompt or completion tokens are used. This requires the agents to report token usage in their messages.\n",
"4. {py:class}`~autogen_agentchat.conditions.TimeoutTermination`: Stops after a specified duration in seconds.\n",
"5. {py:class}`~autogen_agentchat.conditions.HandoffTermination`: Stops when a handoff to a specific target is requested. Handoff messages can be used to build patterns such as {py:class}`~autogen_agentchat.teams.Swarm`. This is useful when you want to pause the run and allow application or user to provide input when an agent hands off to them.\n",
"6. {py:class}`~autogen_agentchat.conditions.SourceMatchTermination`: Stops after a specific agent responds.\n",
"7. {py:class}`~autogen_agentchat.conditions.ExternalTermination`: Enables programmatic control of termination from outside the run. This is useful for UI integration (e.g., \"Stop\" buttons in chat interfaces).\n",
"8. {py:class}`~autogen_agentchat.conditions.StopMessageTermination`: Stops when a {py:class}`~autogen_agentchat.messages.StopMessage` is produced by an agent."
]
},
{
@@ -50,13 +50,14 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.task import Console, MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.conditions import MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"model_client = OpenAIChatCompletionClient(\n",
@@ -295,7 +296,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.12.6"
}
},
"nbformat": 4,