mirror of
https://github.com/microsoft/autogen.git
synced 2026-04-20 03:02:16 -04:00
AgentChat tutorial update to include model context usage and langchain tool (#4843)

* Doc update to include model context usage
* add langchain tools
* update langchain tool wrapper api doc
* updat
* update
* format
* add langchain experimental dev dep
* type
* Fix type
* Fix some types in langchain adapter
* type ignores
@@ -26,12 +26,13 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.messages import TextMessage\n",
"from autogen_agentchat.ui import Console\n",
"from autogen_core import CancellationToken\n",
"from autogen_ext.models.openai import OpenAIChatCompletionClient\n",
"\n",
@@ -113,42 +114,13 @@
"```{note}\n",
"Unlike in v0.2 AgentChat, the tools are executed by the same agent directly within\n",
"the same call to {py:meth}`~autogen_agentchat.agents.AssistantAgent.on_messages`.\n",
"```\n",
"\n",
"## User Proxy Agent\n",
"\n",
"{py:class}`~autogen_agentchat.agents.UserProxyAgent` is a built-in agent that\n",
"provides one way for a user to intervene in the process. This agent will put the team in a temporary blocking state, and thus any exceptions or runtime failures while in the blocked state will result in a deadlock. It is strongly advised that this agent be coupled with a timeout mechanism and that all errors and exceptions emanating from it are handled."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import UserProxyAgent\n",
"\n",
"\n",
"async def user_proxy_run() -> None:\n",
"    user_proxy_agent = UserProxyAgent(\"user_proxy\")\n",
"    response = await user_proxy_agent.on_messages(\n",
"        [TextMessage(content=\"What is your name? \", source=\"user\")], cancellation_token=CancellationToken()\n",
"    )\n",
"    print(f\"Your name is {response.chat_message.content}\")\n",
"\n",
"\n",
"# Use asyncio.run(user_proxy_run()) when running in a script.\n",
"await user_proxy_run()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The User Proxy agent is ideally used for on-demand human-in-the-loop interactions for scenarios such as Just In Time approvals, human feedback, alerts, etc. For slower user interactions, consider terminating a team using a termination condition and starting another one from\n",
"{py:meth}`~autogen_agentchat.base.TaskRunner.run` or {py:meth}`~autogen_agentchat.base.TaskRunner.run_stream` with another message.\n",
"\n",
"## Streaming Messages\n",
"\n",
"We can also stream each message as it is generated by the agent by using the\n",
@@ -183,9 +155,6 @@
}
],
"source": [
"from autogen_agentchat.ui import Console\n",
"\n",
"\n",
"async def assistant_run_stream() -> None:\n",
"    # Option 1: read each message from the stream (as shown in the previous example).\n",
"    # async for message in agent.on_messages_stream(\n",
@@ -216,15 +185,161 @@
"with the final item being the response message in the {py:attr}`~autogen_agentchat.base.Response.chat_message` attribute.\n",
"\n",
"From the messages, you can observe that the assistant agent utilized the `web_search` tool to\n",
"gather information and responded based on the search results.\n",
"gather information and responded based on the search results."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using Tools\n",
"\n",
"Large Language Models (LLMs) are typically limited to generating text or code responses. \n",
"However, many complex tasks benefit from the ability to use external tools that perform specific actions,\n",
"such as fetching data from APIs or databases.\n",
"\n",
"To address this limitation, modern LLMs can now accept a list of available tool schemas \n",
"(descriptions of tools and their arguments) and generate a tool call message. \n",
"This capability is known as **Tool Calling** or **Function Calling** and \n",
"is becoming a popular pattern in building intelligent agent-based applications.\n",
"Refer to the documentation from [OpenAI](https://platform.openai.com/docs/guides/function-calling) \n",
"and [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/tool-use) for more information about tool calling in LLMs.\n",
"\n",
"In AgentChat, the assistant agent can use tools to perform specific actions.\n",
"The `web_search` tool is one such tool that allows the assistant agent to search the web for information.\n",
"A custom tool can be a Python function or a subclass of {py:class}`~autogen_core.tools.BaseTool`.\n",
"\n",
"### Langchain Tools\n",
"\n",
"In addition to custom tools, you can also use tools from the Langchain library\n",
"by wrapping them in {py:class}`~autogen_ext.tools.langchain.LangChainToolAdapter`."
]
},
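The text above notes that a custom tool can be a plain Python function. As an illustration only (not AutoGen's actual internals), the sketch below derives an OpenAI-style tool schema from a function's signature and docstring; `make_tool_schema` is a hypothetical helper written for this example, not a library API.

```python
import inspect
from typing import get_type_hints


def web_search(query: str) -> str:
    """Find information on the web."""
    return "AutoGen is a programming framework for building multi-agent applications."


def make_tool_schema(func) -> dict:
    # Hypothetical helper: build a minimal OpenAI-style tool schema from
    # the function's signature, similar in spirit to what an agent
    # framework does when you pass a plain Python function as a tool.
    hints = get_type_hints(func)
    properties = {
        name: {"type": "string" if hints.get(name) is str else "object"}
        for name in inspect.signature(func).parameters
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }


schema = make_tool_schema(web_search)
print(schema["name"])  # web_search
```

The model never runs the function itself; it only sees this schema and emits a tool call message naming `web_search` with a `query` argument, which the agent then executes.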
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"---------- assistant ----------\n",
"[FunctionCall(id='call_BEYRkf53nBS1G2uG60wHP0zf', arguments='{\"query\":\"df[\\'Age\\'].mean()\"}', name='python_repl_ast')]\n",
"[Prompt tokens: 111, Completion tokens: 22]\n",
"---------- assistant ----------\n",
"[FunctionExecutionResult(content='29.69911764705882', call_id='call_BEYRkf53nBS1G2uG60wHP0zf')]\n",
"---------- assistant ----------\n",
"29.69911764705882\n",
"---------- Summary ----------\n",
"Number of inner messages: 2\n",
"Total prompt tokens: 111\n",
"Total completion tokens: 22\n",
"Duration: 0.62 seconds\n"
]
},
{
"data": {
"text/plain": [
"Response(chat_message=ToolCallSummaryMessage(source='assistant', models_usage=None, content='29.69911764705882', type='ToolCallSummaryMessage'), inner_messages=[ToolCallRequestEvent(source='assistant', models_usage=RequestUsage(prompt_tokens=111, completion_tokens=22), content=[FunctionCall(id='call_BEYRkf53nBS1G2uG60wHP0zf', arguments='{\"query\":\"df[\\'Age\\'].mean()\"}', name='python_repl_ast')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='assistant', models_usage=None, content=[FunctionExecutionResult(content='29.69911764705882', call_id='call_BEYRkf53nBS1G2uG60wHP0zf')], type='ToolCallExecutionEvent')])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"from autogen_ext.tools.langchain import LangChainToolAdapter\n",
"from langchain_experimental.tools.python.tool import PythonAstREPLTool\n",
"\n",
"df = pd.read_csv(\"https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv\")\n",
"tool = LangChainToolAdapter(PythonAstREPLTool(locals={\"df\": df}))\n",
"model_client = OpenAIChatCompletionClient(model=\"gpt-4o\")\n",
"agent = AssistantAgent(\n",
"    \"assistant\", tools=[tool], model_client=model_client, system_message=\"Use the `df` variable to access the dataset.\"\n",
")\n",
"await Console(\n",
"    agent.on_messages_stream(\n",
"        [TextMessage(content=\"What's the average age of the passengers?\", source=\"user\")], CancellationToken()\n",
"    )\n",
")"
]
},
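The `LangChainToolAdapter` used above is an instance of the adapter pattern: it wraps a tool with LangChain's interface so it presents the interface the AutoGen agent expects. A toy sketch of that idea, with hypothetical `ForeignTool` and `ToolAdapter` classes standing in for a LangChain tool and the adapter (this is not either library's real code):

```python
class ForeignTool:
    # Stand-in for a LangChain tool: exposes name/description/run.
    name = "python_repl_ast"
    description = "Run a Python expression."

    def run(self, query: str) -> str:
        # Toy behavior for illustration only; real tools sandbox execution.
        return str(eval(query))


class ToolAdapter:
    # Stand-in for an adapter: re-exposes the foreign tool under the
    # interface the caller expects (here, a `call` method).
    def __init__(self, tool: ForeignTool) -> None:
        self._tool = tool
        self.name = tool.name
        self.description = tool.description

    def call(self, query: str) -> str:
        return self._tool.run(query)


adapter = ToolAdapter(ForeignTool())
print(adapter.call("2 + 2"))  # 4
```

The adapter adds no behavior of its own; it only translates one calling convention into another, which is why a single adapter class can wrap many different LangChain tools.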
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using Model Context\n",
"\n",
"{py:class}`~autogen_agentchat.agents.AssistantAgent` has a `model_context`\n",
"parameter that can be used to pass in a {py:class}`~autogen_core.model_context.ChatCompletionContext`\n",
"object. This allows the agent to use different model contexts, such as\n",
"{py:class}`~autogen_core.model_context.BufferedChatCompletionContext`, to\n",
"limit the context sent to the model.\n",
"\n",
"By default, {py:class}`~autogen_agentchat.agents.AssistantAgent` uses\n",
"the {py:class}`~autogen_core.model_context.UnboundedChatCompletionContext`,\n",
"which sends the full conversation history to the model. To limit the context\n",
"to the last `n` messages, you can use the {py:class}`~autogen_core.model_context.BufferedChatCompletionContext`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_core.model_context import BufferedChatCompletionContext\n",
"\n",
"# Create an agent that uses only the last 5 messages in the context to generate responses.\n",
"agent = AssistantAgent(\n",
"    name=\"assistant\",\n",
"    model_client=model_client,\n",
"    tools=[web_search],\n",
"    system_message=\"Use tools to solve tasks.\",\n",
"    model_context=BufferedChatCompletionContext(buffer_size=5), # Only use the last 5 messages in the context.\n",
")"
]
},
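The effect of a buffered context is easy to see in isolation. The sketch below is a toy stand-in for the idea, not `autogen_core`'s implementation: it keeps only the most recent `buffer_size` messages, so the model never sees more than that many turns.

```python
from collections import deque


class BufferedContext:
    # Toy sketch of a buffered chat context: a bounded deque silently
    # drops the oldest message once the buffer is full.
    def __init__(self, buffer_size: int) -> None:
        self._messages: deque[str] = deque(maxlen=buffer_size)

    def add_message(self, message: str) -> None:
        self._messages.append(message)

    def get_messages(self) -> list[str]:
        return list(self._messages)


ctx = BufferedContext(buffer_size=5)
for i in range(8):
    ctx.add_message(f"message {i}")

# After 8 additions, only messages 3..7 remain in the buffer.
print(ctx.get_messages())
```

Trimming the context this way bounds token usage per model call, at the cost of the model forgetting anything said more than `buffer_size` messages ago.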
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## User Proxy Agent\n",
"\n",
"{py:class}`~autogen_agentchat.agents.UserProxyAgent` is a built-in agent that\n",
"provides one way for a user to intervene in the process. This agent will put the team in a temporary blocking state, and thus any exceptions or runtime failures while in the blocked state will result in a deadlock. It is strongly advised that this agent be coupled with a timeout mechanism and that all errors and exceptions emanating from it are handled."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_agentchat.agents import UserProxyAgent\n",
"\n",
"\n",
"async def user_proxy_run() -> None:\n",
"    user_proxy_agent = UserProxyAgent(\"user_proxy\")\n",
"    response = await user_proxy_agent.on_messages(\n",
"        [TextMessage(content=\"What is your name? \", source=\"user\")], cancellation_token=CancellationToken()\n",
"    )\n",
"    print(f\"Your name is {response.chat_message.content}\")\n",
"\n",
"\n",
"# Use asyncio.run(user_proxy_run()) when running in a script.\n",
"await user_proxy_run()"
]
},
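The prose above strongly advises pairing the user proxy with a timeout so a silent user cannot deadlock the team. One way to sketch that with stdlib asyncio, assuming a hypothetical `ask_user` coroutine standing in for whatever input function collects the user's reply:

```python
import asyncio


async def ask_user(prompt: str) -> str:
    # Stand-in for a blocking user-input coroutine; here it simulates a
    # user who never answers.
    await asyncio.sleep(10)
    return "Alice"


async def ask_user_with_timeout(prompt: str, timeout: float) -> str:
    # Guard the input coroutine with asyncio.wait_for so the caller is
    # unblocked after `timeout` seconds even if no reply arrives.
    try:
        return await asyncio.wait_for(ask_user(prompt), timeout=timeout)
    except asyncio.TimeoutError:
        return "[no response: timed out]"


result = asyncio.run(ask_user_with_timeout("What is your name? ", timeout=0.1))
print(result)  # [no response: timed out]
```

In a real application the timeout branch would feed a fallback message back to the team (or trigger a termination condition) instead of returning a placeholder string.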
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The User Proxy agent is ideally used for on-demand human-in-the-loop interactions for scenarios such as Just In Time approvals, human feedback, alerts, etc. For slower user interactions, consider terminating a team using a termination condition and starting another one from\n",
"{py:meth}`~autogen_agentchat.base.TaskRunner.run` or {py:meth}`~autogen_agentchat.base.TaskRunner.run_stream` with another message."
]
},
{