Load and Save state in AgentChat (#4436)

1. Convert dataclass types to Pydantic BaseModel
2. Add save_state and load_state for ChatAgent
3. Add state types for AgentChat
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This commit is contained in:
Victor Dibia
2024-12-04 16:14:41 -08:00
committed by GitHub
parent fef06fdc8a
commit 777f2abbd7
39 changed files with 3684 additions and 2964 deletions

View File

@@ -48,6 +48,12 @@ A dynamic team that uses handoffs to pass tasks between agents.
How to build custom agents.
:::
:::{grid-item-card} {fas}`users;pst-color-primary` State Management
:link: ./state.html
How to manage state in agents and teams.
:::
::::
```{toctree}
@@ -61,4 +67,5 @@ selector-group-chat
swarm
termination
custom-agents
state
```

View File

@@ -0,0 +1,299 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Managing State \n",
"\n",
"So far, we have discussed how to build components of a multi-agent application: agents, teams, and termination conditions. In many cases, it is useful to save the state of these components to disk and load it back later. This is particularly useful in a web application, where stateless endpoints respond to requests and need to load the application's state from persistent storage.\n",
"\n",
"In this notebook, we will discuss how to save and load the state of agents, teams, and termination conditions. \n",
" \n",
"\n",
"## Saving and Loading Agents\n",
"\n",
"We can get the state of an agent by calling the {py:meth}`~autogen_agentchat.agents.AssistantAgent.save_state` method on \n",
"an {py:class}`~autogen_agentchat.agents.AssistantAgent`. "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In Tanganyika's depths so wide and deep, \n",
"Ancient secrets in still waters sleep, \n",
"Ripples tell tales that time longs to keep. \n"
]
}
],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.conditions import MaxMessageTermination\n",
"from autogen_agentchat.messages import TextMessage\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_core import CancellationToken\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"assistant_agent = AssistantAgent(\n",
" name=\"assistant_agent\",\n",
" system_message=\"You are a helpful assistant\",\n",
" model_client=OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"YOUR_API_KEY\",\n",
" ),\n",
")\n",
"\n",
"# Use asyncio.run(...) when running in a script.\n",
"response = await assistant_agent.on_messages(\n",
" [TextMessage(content=\"Write a 3 line poem on lake tangayika\", source=\"user\")], CancellationToken()\n",
")\n",
"print(response.chat_message.content)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'type': 'AssistantAgentState', 'version': '1.0.0', 'llm_messages': [{'content': 'Write a 3 line poem on lake tangayika', 'source': 'user', 'type': 'UserMessage'}, {'content': \"In Tanganyika's depths so wide and deep, \\nAncient secrets in still waters sleep, \\nRipples tell tales that time longs to keep. \", 'source': 'assistant_agent', 'type': 'AssistantMessage'}]}\n"
]
}
],
"source": [
"agent_state = await assistant_agent.save_state()\n",
"print(agent_state)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The last line of the poem I wrote was: \n",
"\"Ripples tell tales that time longs to keep.\"\n"
]
}
],
"source": [
"new_assistant_agent = AssistantAgent(\n",
" name=\"assistant_agent\",\n",
" system_message=\"You are a helpful assistant\",\n",
" model_client=OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" ),\n",
")\n",
"await new_assistant_agent.load_state(agent_state)\n",
"\n",
"# Use asyncio.run(...) when running in a script.\n",
"response = await new_assistant_agent.on_messages(\n",
" [TextMessage(content=\"What was the last line of the previous poem you wrote\", source=\"user\")], CancellationToken()\n",
")\n",
"print(response.chat_message.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{note}\n",
"For {py:class}`~autogen_agentchat.agents.AssistantAgent`, its state consists of the model_context.\n",
"If you write your own custom agent, consider overriding the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.save_state` and {py:meth}`~autogen_agentchat.agents.BaseChatAgent.load_state` methods to customize the behavior. The default implementations save and load an empty state.\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Saving and Loading Teams \n",
"\n",
"We can get the state of a team by calling the `save_state` method on the team, and load it back by calling the `load_state` method. \n",
"\n",
"When we call `save_state` on a team, it saves the state of all the agents in the team.\n",
"\n",
"We will begin by creating a simple {py:class}`~autogen_agentchat.teams.RoundRobinGroupChat` team with a single agent and ask it to write a poem. "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"Write a beautiful poem 3-line about lake tangayika\n",
"---------- assistant_agent ----------\n",
"In Tanganyika's depths, where light gently weaves, \n",
"Silver reflections dance on ancient water's face, \n",
"Whispered stories of time in the rippling leaves. \n",
"[Prompt tokens: 29, Completion tokens: 36]\n",
"---------- Summary ----------\n",
"Number of messages: 2\n",
"Finish reason: Maximum number of messages 2 reached, current message count: 2\n",
"Total prompt tokens: 29\n",
"Total completion tokens: 36\n",
"Duration: 1.16 seconds\n"
]
}
],
"source": [
"# Define a team.\n",
"assistant_agent = AssistantAgent(\n",
" name=\"assistant_agent\",\n",
" system_message=\"You are a helpful assistant\",\n",
" model_client=OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" ),\n",
")\n",
"agent_team = RoundRobinGroupChat([assistant_agent], termination_condition=MaxMessageTermination(max_messages=2))\n",
"\n",
"# Run the team and stream messages to the console.\n",
"stream = agent_team.run_stream(task=\"Write a beautiful poem 3-line about lake tangayika\")\n",
"\n",
"# Use asyncio.run(...) when running in a script.\n",
"await Console(stream)\n",
"\n",
"# Save the state of the agent team.\n",
"team_state = await agent_team.save_state()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we reset the team (simulating a fresh instantiation) and ask `What was the last line of the poem you wrote?`, we see that the team is unable to answer, as it has no reference to the previous run."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- user ----------\n",
"What was the last line of the poem you wrote?\n",
"---------- assistant_agent ----------\n",
"I don't write poems on my own, but I can help create one with you or try to recall a specific poem if you have one in mind. Let me know what you'd like to do!\n",
"[Prompt tokens: 28, Completion tokens: 39]\n",
"---------- Summary ----------\n",
"Number of messages: 2\n",
"Finish reason: Maximum number of messages 2 reached, current message count: 2\n",
"Total prompt tokens: 28\n",
"Total completion tokens: 39\n",
"Duration: 0.95 seconds\n"
]
},
{
"data": {
"text/plain": [
"TaskResult(messages=[TextMessage(source='user', models_usage=None, type='TextMessage', content='What was the last line of the poem you wrote?'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=28, completion_tokens=39), type='TextMessage', content=\"I don't write poems on my own, but I can help create one with you or try to recall a specific poem if you have one in mind. Let me know what you'd like to do!\")], stop_reason='Maximum number of messages 2 reached, current message count: 2')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await agent_team.reset()\n",
"stream = agent_team.run_stream(task=\"What was the last line of the poem you wrote?\")\n",
"await Console(stream)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we load the state of the team and ask the same question. We see that the team is able to accurately return the last line of the poem it wrote.\n",
"\n",
"Note: You can serialize the state of the team to a file and load it back later."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'type': 'TeamState', 'version': '1.0.0', 'agent_states': {'group_chat_manager/c80054be-efb2-4bc7-ba0d-900962092c44': {'type': 'RoundRobinManagerState', 'version': '1.0.0', 'message_thread': [{'source': 'user', 'models_usage': None, 'type': 'TextMessage', 'content': 'Write a beautiful poem 3-line about lake tangayika'}, {'source': 'assistant_agent', 'models_usage': {'prompt_tokens': 29, 'completion_tokens': 36}, 'type': 'TextMessage', 'content': \"In Tanganyika's depths, where light gently weaves, \\nSilver reflections dance on ancient water's face, \\nWhispered stories of time in the rippling leaves. \"}], 'current_turn': 0, 'next_speaker_index': 0}, 'collect_output_messages/c80054be-efb2-4bc7-ba0d-900962092c44': {}, 'assistant_agent/c80054be-efb2-4bc7-ba0d-900962092c44': {'type': 'ChatAgentContainerState', 'version': '1.0.0', 'agent_state': {'type': 'AssistantAgentState', 'version': '1.0.0', 'llm_messages': [{'content': 'Write a beautiful poem 3-line about lake tangayika', 'source': 'user', 'type': 'UserMessage'}, {'content': \"In Tanganyika's depths, where light gently weaves, \\nSilver reflections dance on ancient water's face, \\nWhispered stories of time in the rippling leaves. \", 'source': 'assistant_agent', 'type': 'AssistantMessage'}]}, 'message_buffer': []}}, 'team_id': 'c80054be-efb2-4bc7-ba0d-900962092c44'}\n",
"---------- user ----------\n",
"What was the last line of the poem you wrote?\n",
"---------- assistant_agent ----------\n",
"The last line of the poem I wrote was: \n",
"\"Whispered stories of time in the rippling leaves.\"\n",
"[Prompt tokens: 88, Completion tokens: 24]\n",
"---------- Summary ----------\n",
"Number of messages: 2\n",
"Finish reason: Maximum number of messages 2 reached, current message count: 2\n",
"Total prompt tokens: 88\n",
"Total completion tokens: 24\n",
"Duration: 0.79 seconds\n"
]
},
{
"data": {
"text/plain": [
"TaskResult(messages=[TextMessage(source='user', models_usage=None, type='TextMessage', content='What was the last line of the poem you wrote?'), TextMessage(source='assistant_agent', models_usage=RequestUsage(prompt_tokens=88, completion_tokens=24), type='TextMessage', content='The last line of the poem I wrote was: \\n\"Whispered stories of time in the rippling leaves.\"')], stop_reason='Maximum number of messages 2 reached, current message count: 2')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"print(team_state)\n",
"\n",
"# Load team state.\n",
"await agent_team.load_state(team_state)\n",
"stream = agent_team.run_stream(task=\"What was the last line of the poem you wrote?\")\n",
"await Console(stream)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
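The state dumps printed in the notebook above are plain, JSON-serializable dicts, so persisting team state to disk (as the note in the notebook suggests) is ordinary JSON I/O. A minimal stdlib sketch of that round trip, using a hypothetical state dict shaped like the `save_state` output shown above:

```python
import json
from pathlib import Path

# Hypothetical state dict, shaped like the save_state() dumps printed above.
team_state = {
    "type": "TeamState",
    "version": "1.0.0",
    "agent_states": {
        "assistant_agent": {
            "type": "AssistantAgentState",
            "version": "1.0.0",
            "llm_messages": [],
        }
    },
}

# Persist the state between runs (or between web requests).
path = Path("team_state.json")
path.write_text(json.dumps(team_state))

# Later: read it back and pass the dict to load_state(...) on a new team.
restored = json.loads(path.read_text())
assert restored == team_state
```

In a stateless web endpoint the same pattern applies: load the JSON blob from persistent storage, call `load_state` on a freshly constructed team, run the task, then `save_state` and write the result back.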

View File

@@ -1,283 +1,283 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# User Approval for Tool Execution using Intervention Handler\n",
"\n",
"This cookbook shows how to intercept the tool execution using\n",
"an intervention handler and prompt the user for permission to execute the tool."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"from typing import Any, List\n",
"\n",
"from autogen_core import AgentId, AgentType, FunctionCall, MessageContext, RoutedAgent, message_handler\n",
"from autogen_core.application import SingleThreadedAgentRuntime\n",
"from autogen_core.base.intervention import DefaultInterventionHandler, DropMessage\n",
"from autogen_core.components.models import (\n",
" ChatCompletionClient,\n",
" LLMMessage,\n",
" SystemMessage,\n",
" UserMessage,\n",
")\n",
"from autogen_core.components.tools import PythonCodeExecutionTool, ToolSchema\n",
"from autogen_core.tool_agent import ToolAgent, ToolException, tool_agent_caller_loop\n",
"from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
"from autogen_ext.models import OpenAIChatCompletionClient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's define a simple message type that carries a string content."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"@dataclass\n",
"class Message:\n",
" content: str"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's create a simple tool use agent that is capable of using tools through a\n",
"{py:class}`~autogen_core.components.tool_agent.ToolAgent`."
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"class ToolUseAgent(RoutedAgent):\n",
" \"\"\"An agent that uses tools to perform tasks. It executes the tools\n",
" by itself by sending the tool execution task to a ToolAgent.\"\"\"\n",
"\n",
" def __init__(\n",
" self,\n",
" description: str,\n",
" system_messages: List[SystemMessage],\n",
" model_client: ChatCompletionClient,\n",
" tool_schema: List[ToolSchema],\n",
" tool_agent_type: AgentType,\n",
" ) -> None:\n",
" super().__init__(description)\n",
" self._model_client = model_client\n",
" self._system_messages = system_messages\n",
" self._tool_schema = tool_schema\n",
" self._tool_agent_id = AgentId(type=tool_agent_type, key=self.id.key)\n",
"\n",
" @message_handler\n",
" async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n",
"        \"\"\"Handle a user message, execute the model and tools, and return the response.\"\"\"\n",
" session: List[LLMMessage] = [UserMessage(content=message.content, source=\"User\")]\n",
" # Use the tool agent to execute the tools, and get the output messages.\n",
" output_messages = await tool_agent_caller_loop(\n",
" self,\n",
" tool_agent_id=self._tool_agent_id,\n",
" model_client=self._model_client,\n",
" input_messages=session,\n",
" tool_schema=self._tool_schema,\n",
" cancellation_token=ctx.cancellation_token,\n",
" )\n",
" # Extract the final response from the output messages.\n",
" final_response = output_messages[-1].content\n",
" assert isinstance(final_response, str)\n",
" return Message(content=final_response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tool use agent sends tool call requests to the tool agent to execute tools,\n",
"so we can intercept the messages sent by the tool use agent to the tool agent\n",
"to prompt the user for permission to execute the tool.\n",
"\n",
"Let's create an intervention handler that intercepts the messages and prompts\n",
"the user for permission before allowing the tool execution."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"class ToolInterventionHandler(DefaultInterventionHandler):\n",
" async def on_send(self, message: Any, *, sender: AgentId | None, recipient: AgentId) -> Any | type[DropMessage]:\n",
" if isinstance(message, FunctionCall):\n",
" # Request user prompt for tool execution.\n",
" user_input = input(\n",
" f\"Function call: {message.name}\\nArguments: {message.arguments}\\nDo you want to execute the tool? (y/n): \"\n",
" )\n",
" if user_input.strip().lower() != \"y\":\n",
" raise ToolException(content=\"User denied tool execution.\", call_id=message.id)\n",
" return message"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can create a runtime with the intervention handler registered."
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"# Create the runtime with the intervention handler.\n",
"runtime = SingleThreadedAgentRuntime(intervention_handlers=[ToolInterventionHandler()])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we will use a tool for Python code execution.\n",
"First, we create a Docker-based command-line code executor\n",
"using {py:class}`~autogen_ext.code_executors.DockerCommandLineCodeExecutor`,\n",
"and then use it to instantiate a built-in Python code execution tool\n",
"{py:class}`~autogen_core.components.tools.PythonCodeExecutionTool`\n",
"that runs code in a Docker container."
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"# Create the docker executor for the Python code execution tool.\n",
"docker_executor = DockerCommandLineCodeExecutor()\n",
"\n",
"# Create the Python code execution tool.\n",
"python_tool = PythonCodeExecutionTool(executor=docker_executor)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the agents with tools and tool schema."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentType(type='tool_enabled_agent')"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Register agents.\n",
"tool_agent_type = await ToolAgent.register(\n",
" runtime,\n",
" \"tool_executor_agent\",\n",
" lambda: ToolAgent(\n",
" description=\"Tool Executor Agent\",\n",
" tools=[python_tool],\n",
" ),\n",
")\n",
"await ToolUseAgent.register(\n",
" runtime,\n",
" \"tool_enabled_agent\",\n",
" lambda: ToolUseAgent(\n",
" description=\"Tool Use Agent\",\n",
" system_messages=[SystemMessage(\"You are a helpful AI Assistant. Use your tools to solve problems.\")],\n",
" model_client=OpenAIChatCompletionClient(model=\"gpt-4o-mini\"),\n",
" tool_schema=[python_tool.schema],\n",
" tool_agent_type=tool_agent_type,\n",
" ),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the agents by starting the runtime and sending a message to the tool use agent.\n",
"The intervention handler will prompt you for permission to execute the tool."
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The output of the code is: **Hello, World!**\n"
]
}
],
"source": [
"# Start the runtime and the docker executor.\n",
"await docker_executor.start()\n",
"runtime.start()\n",
"\n",
"# Send a task to the tool user.\n",
"response = await runtime.send_message(\n",
" Message(\"Run the following Python code: print('Hello, World!')\"), AgentId(\"tool_enabled_agent\", \"default\")\n",
")\n",
"print(response.content)\n",
"\n",
"# Stop the runtime and the docker executor.\n",
"await runtime.stop()\n",
"await docker_executor.stop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# User Approval for Tool Execution using Intervention Handler\n",
"\n",
"This cookbook shows how to intercept the tool execution using\n",
"an intervention handler and prompt the user for permission to execute the tool."
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"from typing import Any, List\n",
"\n",
"from autogen_core import AgentId, AgentType, FunctionCall, MessageContext, RoutedAgent, message_handler\n",
"from autogen_core.application import SingleThreadedAgentRuntime\n",
"from autogen_core.base.intervention import DefaultInterventionHandler, DropMessage\n",
"from autogen_core.components.models import (\n",
" ChatCompletionClient,\n",
" LLMMessage,\n",
" SystemMessage,\n",
" UserMessage,\n",
")\n",
"from autogen_core.components.tools import PythonCodeExecutionTool, ToolSchema\n",
"from autogen_core.tool_agent import ToolAgent, ToolException, tool_agent_caller_loop\n",
"from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
"from autogen_ext.models import OpenAIChatCompletionClient"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's define a simple message type that carries a string content."
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"@dataclass\n",
"class Message:\n",
" content: str"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's create a simple tool use agent that is capable of using tools through a\n",
"{py:class}`~autogen_core.components.tool_agent.ToolAgent`."
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"class ToolUseAgent(RoutedAgent):\n",
" \"\"\"An agent that uses tools to perform tasks. It executes the tools\n",
" by itself by sending the tool execution task to a ToolAgent.\"\"\"\n",
"\n",
" def __init__(\n",
" self,\n",
" description: str,\n",
" system_messages: List[SystemMessage],\n",
" model_client: ChatCompletionClient,\n",
" tool_schema: List[ToolSchema],\n",
" tool_agent_type: AgentType,\n",
" ) -> None:\n",
" super().__init__(description)\n",
" self._model_client = model_client\n",
" self._system_messages = system_messages\n",
" self._tool_schema = tool_schema\n",
" self._tool_agent_id = AgentId(type=tool_agent_type, key=self.id.key)\n",
"\n",
" @message_handler\n",
" async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n",
"        \"\"\"Handle a user message, execute the model and tools, and return the response.\"\"\"\n",
" session: List[LLMMessage] = [UserMessage(content=message.content, source=\"User\")]\n",
" # Use the tool agent to execute the tools, and get the output messages.\n",
" output_messages = await tool_agent_caller_loop(\n",
" self,\n",
" tool_agent_id=self._tool_agent_id,\n",
" model_client=self._model_client,\n",
" input_messages=session,\n",
" tool_schema=self._tool_schema,\n",
" cancellation_token=ctx.cancellation_token,\n",
" )\n",
" # Extract the final response from the output messages.\n",
" final_response = output_messages[-1].content\n",
" assert isinstance(final_response, str)\n",
" return Message(content=final_response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tool use agent sends tool call requests to the tool agent to execute tools,\n",
"so we can intercept the messages sent by the tool use agent to the tool agent\n",
"to prompt the user for permission to execute the tool.\n",
"\n",
"Let's create an intervention handler that intercepts the messages and prompts\n",
"the user for permission before allowing the tool execution."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"class ToolInterventionHandler(DefaultInterventionHandler):\n",
" async def on_send(self, message: Any, *, sender: AgentId | None, recipient: AgentId) -> Any | type[DropMessage]:\n",
" if isinstance(message, FunctionCall):\n",
" # Request user prompt for tool execution.\n",
" user_input = input(\n",
" f\"Function call: {message.name}\\nArguments: {message.arguments}\\nDo you want to execute the tool? (y/n): \"\n",
" )\n",
" if user_input.strip().lower() != \"y\":\n",
" raise ToolException(content=\"User denied tool execution.\", call_id=message.id)\n",
" return message"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, we can create a runtime with the intervention handler registered."
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"# Create the runtime with the intervention handler.\n",
"runtime = SingleThreadedAgentRuntime(intervention_handlers=[ToolInterventionHandler()])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we will use a tool for Python code execution.\n",
"First, we create a Docker-based command-line code executor\n",
"using {py:class}`~autogen_ext.code_executors.DockerCommandLineCodeExecutor`,\n",
"and then use it to instantiate a built-in Python code execution tool\n",
"{py:class}`~autogen_core.components.tools.PythonCodeExecutionTool`\n",
"that runs code in a Docker container."
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [],
"source": [
"# Create the docker executor for the Python code execution tool.\n",
"docker_executor = DockerCommandLineCodeExecutor()\n",
"\n",
"# Create the Python code execution tool.\n",
"python_tool = PythonCodeExecutionTool(executor=docker_executor)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Register the agents with tools and tool schema."
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentType(type='tool_enabled_agent')"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Register agents.\n",
"tool_agent_type = await ToolAgent.register(\n",
" runtime,\n",
" \"tool_executor_agent\",\n",
" lambda: ToolAgent(\n",
" description=\"Tool Executor Agent\",\n",
" tools=[python_tool],\n",
" ),\n",
")\n",
"await ToolUseAgent.register(\n",
" runtime,\n",
" \"tool_enabled_agent\",\n",
" lambda: ToolUseAgent(\n",
" description=\"Tool Use Agent\",\n",
" system_messages=[SystemMessage(content=\"You are a helpful AI Assistant. Use your tools to solve problems.\")],\n",
" model_client=OpenAIChatCompletionClient(model=\"gpt-4o-mini\"),\n",
" tool_schema=[python_tool.schema],\n",
" tool_agent_type=tool_agent_type,\n",
" ),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Run the agents by starting the runtime and sending a message to the tool use agent.\n",
"The intervention handler will prompt you for permission to execute the tool."
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The output of the code is: **Hello, World!**\n"
]
}
],
"source": [
"# Start the runtime and the docker executor.\n",
"await docker_executor.start()\n",
"runtime.start()\n",
"\n",
"# Send a task to the tool user.\n",
"response = await runtime.send_message(\n",
" Message(\"Run the following Python code: print('Hello, World!')\"), AgentId(\"tool_enabled_agent\", \"default\")\n",
")\n",
"print(response.content)\n",
"\n",
"# Stop the runtime and the docker executor.\n",
"await runtime.stop()\n",
"await docker_executor.stop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
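The intervention pattern in this cookbook reduces to: a handler's `on_send` inspects each message before delivery and either passes it through, vetoes it by raising, or drops it. A library-free sketch of that control flow (class and type names here are illustrative stand-ins, not the `autogen_core` API; a decision callback replaces the interactive `input()` prompt so the logic is testable):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class FunctionCall:
    # Minimal stand-in for the FunctionCall message type used above.
    id: str
    name: str
    arguments: str


class ToolDenied(Exception):
    """Raised when the user (or policy) vetoes a tool call."""


class ApprovalHandler:
    """Sketch of the intervention pattern: inspect each outgoing message
    and veto tool calls that the decision callback rejects."""

    def __init__(self, approve: Callable[[FunctionCall], bool]) -> None:
        self._approve = approve  # replaces the interactive input() prompt

    def on_send(self, message: object) -> object:
        if isinstance(message, FunctionCall) and not self._approve(message):
            raise ToolDenied(f"User denied tool execution: {message.name}")
        return message  # non-tool messages pass through untouched


# Policy: allow everything except a hypothetical "rm_rf" tool.
handler = ApprovalHandler(approve=lambda call: call.name != "rm_rf")

# An approved call passes through unchanged.
ok = handler.on_send(FunctionCall(id="1", name="run_python", arguments="{}"))
assert ok.name == "run_python"

# A rejected call is vetoed before delivery.
denied = False
try:
    handler.on_send(FunctionCall(id="2", name="rm_rf", arguments="{}"))
except ToolDenied:
    denied = True
assert denied
```

In the real runtime the handler additionally raises `ToolException` with the call id so the tool agent can report the denial back to the model, as shown in the notebook.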

View File

@@ -158,7 +158,7 @@
" super().__init__(description=description)\n",
" self._group_chat_topic_type = group_chat_topic_type\n",
" self._model_client = model_client\n",
" self._system_message = SystemMessage(system_message)\n",
" self._system_message = SystemMessage(content=system_message)\n",
" self._chat_history: List[LLMMessage] = []\n",
"\n",
" @message_handler\n",
@@ -427,7 +427,7 @@
"Read the above conversation. Then select the next role from {participants} to play. Only return the role.\n",
"\"\"\"\n",
" system_message = SystemMessage(\n",
" selector_prompt.format(\n",
" content=selector_prompt.format(\n",
" roles=roles,\n",
" history=history,\n",
" participants=str(\n",

View File

@@ -1,315 +1,315 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tools\n",
"\n",
"Tools are code that can be executed by an agent to perform actions. A tool\n",
"can be a simple function such as a calculator, or an API call to a third-party service\n",
"such as stock price lookup or weather forecast.\n",
"In the context of AI agents, tools are designed to be executed by agents in\n",
"response to model-generated function calls.\n",
"\n",
"AutoGen provides the {py:mod}`autogen_core.components.tools` module with a suite of built-in\n",
"tools and utilities for creating and running custom tools."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Built-in Tools\n",
"\n",
"One of the built-in tools is the {py:class}`~autogen_core.components.tools.PythonCodeExecutionTool`,\n",
"which allows agents to execute Python code snippets.\n",
"\n",
"Here is how you create the tool and use it."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello, world!\n",
"\n"
]
}
],
"source": [
"from autogen_core import CancellationToken\n",
"from autogen_core.components.tools import PythonCodeExecutionTool\n",
"from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
"\n",
"# Create the tool.\n",
"code_executor = DockerCommandLineCodeExecutor()\n",
"await code_executor.start()\n",
"code_execution_tool = PythonCodeExecutionTool(code_executor)\n",
"cancellation_token = CancellationToken()\n",
"\n",
"# Use the tool directly without an agent.\n",
"code = \"print('Hello, world!')\"\n",
"result = await code_execution_tool.run_json({\"code\": code}, cancellation_token)\n",
"print(code_execution_tool.return_value_as_string(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The {py:class}`~autogen_ext.code_executors.DockerCommandLineCodeExecutor`\n",
"class is a built-in code executor that runs Python code snippets\n",
"in a Docker container.\n",
"The {py:class}`~autogen_core.components.tools.PythonCodeExecutionTool` class wraps the code executor\n",
"and provides a simple interface to execute Python code snippets.\n",
"\n",
"Other built-in tools will be added in the future."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Custom Function Tools\n",
"\n",
"A tool can also be a simple Python function that performs a specific action.\n",
"To create a custom function tool, you just need to create a Python function\n",
"and use the {py:class}`~autogen_core.components.tools.FunctionTool` class to wrap it.\n",
"\n",
"The {py:class}`~autogen_core.components.tools.FunctionTool` class uses descriptions and type annotations\n",
"to inform the LLM when and how to use a given function. The description provides context\n",
"about the function's purpose and intended use cases, while type annotations inform the LLM about\n",
"the expected parameters and return type.\n",
"\n",
"For example, a simple tool to obtain the stock price of a company might look like this:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"80.44429939059668\n"
]
}
],
"source": [
"import random\n",
"\n",
"from autogen_core import CancellationToken\n",
"from autogen_core.components.tools import FunctionTool\n",
"from typing_extensions import Annotated\n",
"\n",
"\n",
"async def get_stock_price(ticker: str, date: Annotated[str, \"Date in YYYY/MM/DD\"]) -> float:\n",
" # Returns a random stock price for demonstration purposes.\n",
" return random.uniform(10, 200)\n",
"\n",
"\n",
"# Create a function tool.\n",
"stock_price_tool = FunctionTool(get_stock_price, description=\"Get the stock price.\")\n",
"\n",
"# Run the tool.\n",
"cancellation_token = CancellationToken()\n",
"result = await stock_price_tool.run_json({\"ticker\": \"AAPL\", \"date\": \"2021/01/01\"}, cancellation_token)\n",
"\n",
"# Print the result.\n",
"print(stock_price_tool.return_value_as_string(result))"
]
},
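{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "The {py:class}`~autogen_core.components.tools.FunctionTool` wrapper derives a schema from the function's\n",
  "signature, type annotations, and description. As a quick sanity check, you can inspect the generated\n",
  "schema that will be presented to the model (a sketch; the exact schema fields may vary across versions):"
 ]
},
{
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
  "# Inspect the auto-generated tool schema (name, description, parameters).\n",
  "print(stock_price_tool.schema)"
 ]
},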
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool-Equipped Agent\n",
"\n",
"To use tools with an agent, you can use {py:class}`~autogen_core.components.tool_agent.ToolAgent`,\n",
"by using it in a composition pattern.\n",
"Here is an example tool-use agent that uses {py:class}`~autogen_core.components.tool_agent.ToolAgent`\n",
"as an inner agent for executing tools."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"from typing import List\n",
"\n",
"from autogen_core import AgentId, AgentInstantiationContext, MessageContext, RoutedAgent, message_handler\n",
"from autogen_core.application import SingleThreadedAgentRuntime\n",
"from autogen_core.components.models import (\n",
" ChatCompletionClient,\n",
" LLMMessage,\n",
" SystemMessage,\n",
" UserMessage,\n",
")\n",
"from autogen_core.components.tools import FunctionTool, Tool, ToolSchema\n",
"from autogen_core.tool_agent import ToolAgent, tool_agent_caller_loop\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"\n",
"@dataclass\n",
"class Message:\n",
" content: str\n",
"\n",
"\n",
"class ToolUseAgent(RoutedAgent):\n",
" def __init__(self, model_client: ChatCompletionClient, tool_schema: List[ToolSchema], tool_agent_type: str) -> None:\n",
" super().__init__(\"An agent with tools\")\n",
" self._system_messages: List[LLMMessage] = [SystemMessage(\"You are a helpful AI assistant.\")]\n",
" self._model_client = model_client\n",
" self._tool_schema = tool_schema\n",
" self._tool_agent_id = AgentId(tool_agent_type, self.id.key)\n",
"\n",
" @message_handler\n",
" async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n",
" # Create a session of messages.\n",
" session: List[LLMMessage] = [UserMessage(content=message.content, source=\"user\")]\n",
" # Run the caller loop to handle tool calls.\n",
" messages = await tool_agent_caller_loop(\n",
" self,\n",
" tool_agent_id=self._tool_agent_id,\n",
" model_client=self._model_client,\n",
" input_messages=session,\n",
" tool_schema=self._tool_schema,\n",
" cancellation_token=ctx.cancellation_token,\n",
" )\n",
" # Return the final response.\n",
" assert isinstance(messages[-1].content, str)\n",
" return Message(content=messages[-1].content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `ToolUseAgent` class uses a convenience function {py:meth}`~autogen_core.components.tool_agent.tool_agent_caller_loop`, \n",
"to handle the interaction between the model and the tool agent.\n",
"The core idea can be described using a simple control flow graph:\n",
"\n",
"![ToolUseAgent control flow graph](tool-use-agent-cfg.svg)\n",
"\n",
"The `ToolUseAgent`'s `handle_user_message` handler handles messages from the user,\n",
"and determines whether the model has generated a tool call.\n",
"If the model has generated tool calls, then the handler sends a function call\n",
"message to the {py:class}`~autogen_core.components.tool_agent.ToolAgent` agent\n",
"to execute the tools,\n",
"and then queries the model again with the results of the tool calls.\n",
"This process continues until the model stops generating tool calls,\n",
"at which point the final response is returned to the user.\n",
"\n",
"By having the tool execution logic in a separate agent,\n",
"we expose the model-tool interactions to the agent runtime as messages, so the tool executions\n",
"can be observed externally and intercepted if necessary.\n",
"\n",
"To run the agent, we need to create a runtime and register the agent."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentType(type='tool_use_agent')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a runtime.\n",
"runtime = SingleThreadedAgentRuntime()\n",
"# Create the tools.\n",
"tools: List[Tool] = [FunctionTool(get_stock_price, description=\"Get the stock price.\")]\n",
"# Register the agents.\n",
"await ToolAgent.register(runtime, \"tool_executor_agent\", lambda: ToolAgent(\"tool executor agent\", tools))\n",
"await ToolUseAgent.register(\n",
" runtime,\n",
" \"tool_use_agent\",\n",
" lambda: ToolUseAgent(\n",
" OpenAIChatCompletionClient(model=\"gpt-4o-mini\"), [tool.schema for tool in tools], \"tool_executor_agent\"\n",
" ),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This example uses the {py:class}`autogen_core.components.models.OpenAIChatCompletionClient`,\n",
"for Azure OpenAI and other clients, see [Model Clients](./model-clients.ipynb).\n",
"Let's test the agent with a question about stock price."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The stock price of NVDA (NVIDIA Corporation) on June 1, 2024, was approximately $179.46.\n"
]
}
],
"source": [
"# Start processing messages.\n",
"runtime.start()\n",
"# Send a direct message to the tool agent.\n",
"tool_use_agent = AgentId(\"tool_use_agent\", \"default\")\n",
"response = await runtime.send_message(Message(\"What is the stock price of NVDA on 2024/06/01?\"), tool_use_agent)\n",
"print(response.content)\n",
"# Stop processing messages.\n",
"await runtime.stop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "autogen_core",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tools\n",
"\n",
"Tools are code that can be executed by an agent to perform actions. A tool\n",
"can be a simple function such as a calculator, or an API call to a third-party service\n",
"such as stock price lookup or weather forecast.\n",
"In the context of AI agents, tools are designed to be executed by agents in\n",
"response to model-generated function calls.\n",
"\n",
"AutoGen provides the {py:mod}`autogen_core.components.tools` module with a suite of built-in\n",
"tools and utilities for creating and running custom tools."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Built-in Tools\n",
"\n",
"One of the built-in tools is the {py:class}`~autogen_core.components.tools.PythonCodeExecutionTool`,\n",
"which allows agents to execute Python code snippets.\n",
"\n",
"Here is how you create the tool and use it."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello, world!\n",
"\n"
]
}
],
"source": [
"from autogen_core import CancellationToken\n",
"from autogen_core.components.tools import PythonCodeExecutionTool\n",
"from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
"\n",
"# Create the tool.\n",
"code_executor = DockerCommandLineCodeExecutor()\n",
"await code_executor.start()\n",
"code_execution_tool = PythonCodeExecutionTool(code_executor)\n",
"cancellation_token = CancellationToken()\n",
"\n",
"# Use the tool directly without an agent.\n",
"code = \"print('Hello, world!')\"\n",
"result = await code_execution_tool.run_json({\"code\": code}, cancellation_token)\n",
"print(code_execution_tool.return_value_as_string(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The {py:class}`~autogen_core.components.code_executor.docker_executorCommandLineCodeExecutor`\n",
"class is a built-in code executor that runs Python code snippets in a subprocess\n",
"in the local command line environment.\n",
"The {py:class}`~autogen_core.components.tools.PythonCodeExecutionTool` class wraps the code executor\n",
"and provides a simple interface to execute Python code snippets.\n",
"\n",
"Other built-in tools will be added in the future."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Custom Function Tools\n",
"\n",
"A tool can also be a simple Python function that performs a specific action.\n",
"To create a custom function tool, you just need to create a Python function\n",
"and use the {py:class}`~autogen_core.components.tools.FunctionTool` class to wrap it.\n",
"\n",
"The {py:class}`~autogen_core.components.tools.FunctionTool` class uses descriptions and type annotations\n",
"to inform the LLM when and how to use a given function. The description provides context\n",
"about the functions purpose and intended use cases, while type annotations inform the LLM about\n",
"the expected parameters and return type.\n",
"\n",
"For example, a simple tool to obtain the stock price of a company might look like this:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"80.44429939059668\n"
]
}
],
"source": [
"import random\n",
"\n",
"from autogen_core import CancellationToken\n",
"from autogen_core.components.tools import FunctionTool\n",
"from typing_extensions import Annotated\n",
"\n",
"\n",
"async def get_stock_price(ticker: str, date: Annotated[str, \"Date in YYYY/MM/DD\"]) -> float:\n",
" # Returns a random stock price for demonstration purposes.\n",
" return random.uniform(10, 200)\n",
"\n",
"\n",
"# Create a function tool.\n",
"stock_price_tool = FunctionTool(get_stock_price, description=\"Get the stock price.\")\n",
"\n",
"# Run the tool.\n",
"cancellation_token = CancellationToken()\n",
"result = await stock_price_tool.run_json({\"ticker\": \"AAPL\", \"date\": \"2021/01/01\"}, cancellation_token)\n",
"\n",
"# Print the result.\n",
"print(stock_price_tool.return_value_as_string(result))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool-Equipped Agent\n",
"\n",
"To use tools with an agent, you can use {py:class}`~autogen_core.components.tool_agent.ToolAgent`,\n",
"by using it in a composition pattern.\n",
"Here is an example tool-use agent that uses {py:class}`~autogen_core.components.tool_agent.ToolAgent`\n",
"as an inner agent for executing tools."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"from typing import List\n",
"\n",
"from autogen_core import AgentId, AgentInstantiationContext, MessageContext, RoutedAgent, message_handler\n",
"from autogen_core.application import SingleThreadedAgentRuntime\n",
"from autogen_core.components.models import (\n",
" ChatCompletionClient,\n",
" LLMMessage,\n",
" SystemMessage,\n",
" UserMessage,\n",
")\n",
"from autogen_core.components.tools import FunctionTool, Tool, ToolSchema\n",
"from autogen_core.tool_agent import ToolAgent, tool_agent_caller_loop\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"\n",
"@dataclass\n",
"class Message:\n",
" content: str\n",
"\n",
"\n",
"class ToolUseAgent(RoutedAgent):\n",
" def __init__(self, model_client: ChatCompletionClient, tool_schema: List[ToolSchema], tool_agent_type: str) -> None:\n",
" super().__init__(\"An agent with tools\")\n",
" self._system_messages: List[LLMMessage] = [SystemMessage(content=\"You are a helpful AI assistant.\")]\n",
" self._model_client = model_client\n",
" self._tool_schema = tool_schema\n",
" self._tool_agent_id = AgentId(tool_agent_type, self.id.key)\n",
"\n",
" @message_handler\n",
" async def handle_user_message(self, message: Message, ctx: MessageContext) -> Message:\n",
" # Create a session of messages.\n",
" session: List[LLMMessage] = [UserMessage(content=message.content, source=\"user\")]\n",
" # Run the caller loop to handle tool calls.\n",
" messages = await tool_agent_caller_loop(\n",
" self,\n",
" tool_agent_id=self._tool_agent_id,\n",
" model_client=self._model_client,\n",
" input_messages=session,\n",
" tool_schema=self._tool_schema,\n",
" cancellation_token=ctx.cancellation_token,\n",
" )\n",
" # Return the final response.\n",
" assert isinstance(messages[-1].content, str)\n",
" return Message(content=messages[-1].content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `ToolUseAgent` class uses a convenience function {py:meth}`~autogen_core.components.tool_agent.tool_agent_caller_loop`, \n",
"to handle the interaction between the model and the tool agent.\n",
"The core idea can be described using a simple control flow graph:\n",
"\n",
"![ToolUseAgent control flow graph](tool-use-agent-cfg.svg)\n",
"\n",
"The `ToolUseAgent`'s `handle_user_message` handler handles messages from the user,\n",
"and determines whether the model has generated a tool call.\n",
"If the model has generated tool calls, then the handler sends a function call\n",
"message to the {py:class}`~autogen_core.components.tool_agent.ToolAgent` agent\n",
"to execute the tools,\n",
"and then queries the model again with the results of the tool calls.\n",
"This process continues until the model stops generating tool calls,\n",
"at which point the final response is returned to the user.\n",
"\n",
"By having the tool execution logic in a separate agent,\n",
"we expose the model-tool interactions to the agent runtime as messages, so the tool executions\n",
"can be observed externally and intercepted if necessary.\n",
"\n",
"To run the agent, we need to create a runtime and register the agent."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AgentType(type='tool_use_agent')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Create a runtime.\n",
"runtime = SingleThreadedAgentRuntime()\n",
"# Create the tools.\n",
"tools: List[Tool] = [FunctionTool(get_stock_price, description=\"Get the stock price.\")]\n",
"# Register the agents.\n",
"await ToolAgent.register(runtime, \"tool_executor_agent\", lambda: ToolAgent(\"tool executor agent\", tools))\n",
"await ToolUseAgent.register(\n",
" runtime,\n",
" \"tool_use_agent\",\n",
" lambda: ToolUseAgent(\n",
" OpenAIChatCompletionClient(model=\"gpt-4o-mini\"), [tool.schema for tool in tools], \"tool_executor_agent\"\n",
" ),\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This example uses the {py:class}`autogen_core.components.models.OpenAIChatCompletionClient`,\n",
"for Azure OpenAI and other clients, see [Model Clients](./model-clients.ipynb).\n",
"Let's test the agent with a question about stock price."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The stock price of NVDA (NVIDIA Corporation) on June 1, 2024, was approximately $179.46.\n"
]
}
],
"source": [
"# Start processing messages.\n",
"runtime.start()\n",
"# Send a direct message to the tool agent.\n",
"tool_use_agent = AgentId(\"tool_use_agent\", \"default\")\n",
"response = await runtime.send_message(Message(\"What is the stock price of NVDA on 2024/06/01?\"), tool_use_agent)\n",
"print(response.content)\n",
"# Stop processing messages.\n",
"await runtime.stop()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "autogen_core",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}