Improve agentchat tutorial (#4233)

This commit is contained in:
Eric Zhu
2024-11-16 09:01:38 -08:00
committed by GitHub
parent d213c1c061
commit 4aec53c36f
8 changed files with 434 additions and 424 deletions

View File

@@ -158,6 +158,28 @@
"For more information on tool calling, refer to the documentation from [OpenAI](https://platform.openai.com/docs/guides/function-calling) and [Anthropic](https://docs.anthropic.com/en/docs/build-with-claude/tool-use)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Other Preset Agents\n",
"\n",
"The following preset agents are available:\n",
"\n",
"- {py:class}`~autogen_agentchat.agents.CodeExecutorAgent`: An agent that can execute code.\n",
"- {py:class}`~autogen_ext.agents.MultimodalWebSurfer`: A multi-modal agent that can search the web and visit web pages for information."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Step\n",
"\n",
"Now that we have discussed how to use the {py:class}`~autogen_agentchat.agents.AssistantAgent`,\n",
"we can move on to the next section to learn how to use the teams feature of AgentChat."
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -186,246 +208,6 @@
")\n",
"print(result) -->"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CodeExecutorAgent\n",
"\n",
"The {py:class}`~autogen_agentchat.agents.CodeExecutorAgent`\n",
"preset extracts and executes code snippets found in received messages and returns the output. It is typically used within a team with another agent that generates code snippets to be executed.\n",
"\n",
"```{note}\n",
"It is recommended that the {py:class}`~autogen_agentchat.agents.CodeExecutorAgent`\n",
"uses a Docker container to execute code. This ensures that model-generated code is executed in an isolated environment. To use Docker, your environment must have Docker installed and running. \n",
"Follow the installation instructions for [Docker](https://docs.docker.com/get-docker/).\n",
"```\n",
"\n",
"In this example, we show how to set up a {py:class}`~autogen_agentchat.agents.CodeExecutorAgent` that uses the\n",
"{py:class}`~autogen_ext.code_executors.DockerCommandLineCodeExecutor` \n",
"to execute code snippets in a Docker container. The `work_dir` parameter indicates where all executed files are first saved locally before being executed in the Docker container."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"source='code_executor' models_usage=None content='Hello world\\n'\n"
]
}
],
"source": [
"from autogen_agentchat.agents import CodeExecutorAgent\n",
"from autogen_agentchat.messages import TextMessage\n",
"from autogen_core.base import CancellationToken\n",
"from autogen_ext.code_executors import DockerCommandLineCodeExecutor\n",
"\n",
"\n",
"async def run_code_executor_agent() -> None:\n",
" # Create a code executor agent that uses a Docker container to execute code.\n",
" code_executor = DockerCommandLineCodeExecutor(work_dir=\"coding\")\n",
" await code_executor.start()\n",
" code_executor_agent = CodeExecutorAgent(\"code_executor\", code_executor=code_executor)\n",
"\n",
" # Run the agent with a given code snippet.\n",
" task = TextMessage(\n",
" content=\"\"\"Here is some code\n",
"```python\n",
"print('Hello world')\n",
"```\n",
"\"\"\",\n",
" source=\"user\",\n",
" )\n",
" response = await code_executor_agent.on_messages([task], CancellationToken())\n",
" print(response.chat_message)\n",
"\n",
" # Stop the code executor.\n",
" await code_executor.stop()\n",
"\n",
"\n",
"# Use asyncio.run(run_code_executor_agent()) when running in a script.\n",
"await run_code_executor_agent()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This example shows the agent executing a code snippet that prints \"Hello world\".\n",
"The agent then returns the output of the code execution."
]
},
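The snippet-extraction behavior described above can be sketched standalone. This is a simplified illustration only, not AgentChat's actual implementation: the regex, the `extract_code_blocks` name, and the return shape are all assumptions made for the example.

```python
import re
from typing import List, Tuple

# Build the fence marker programmatically so this example can itself live
# inside a fenced block.
FENCE = "`" * 3

# Hypothetical pattern: optional language tag, then everything up to the
# closing fence. Simplified for illustration.
CODE_BLOCK_PATTERN = re.compile(FENCE + r"(\w+)?\n(.*?)" + FENCE, re.DOTALL)


def extract_code_blocks(text: str) -> List[Tuple[str, str]]:
    """Return (language, code) pairs for every fenced block found in text."""
    return [(lang or "", code) for lang, code in CODE_BLOCK_PATTERN.findall(text)]


message = "Here is some code\n" + FENCE + "python\nprint('Hello world')\n" + FENCE + "\n"
print(extract_code_blocks(message))  # [('python', "print('Hello world')\n")]
```

The real agent additionally runs each extracted snippet through its configured code executor and packages the output into a response message.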
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Build Your Own Agents\n",
"\n",
"You may have agents with behaviors that do not fall into a preset. \n",
"In such cases, you can build custom agents.\n",
"\n",
"All agents in AgentChat inherit from the {py:class}`~autogen_agentchat.agents.BaseChatAgent` \n",
"class and implement the following abstract methods and attributes:\n",
"\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: The abstract method that defines the behavior of the agent in response to messages. This method is called when the agent is asked to provide a response in {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run`. It returns a {py:class}`~autogen_agentchat.base.Response` object.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_reset`: The abstract method that resets the agent to its initial state. This method is called when the agent is asked to reset itself.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.produced_message_types`: The list of possible {py:class}`~autogen_agentchat.messages.ChatMessage` message types the agent can produce in its response.\n",
"\n",
"Optionally, you can implement the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` method to stream messages as they are generated by the agent. If this method is not implemented, the agent\n",
"uses the default implementation of {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`\n",
"that calls the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method and\n",
"yields all messages in the response."
]
},
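The relationship between `on_messages` and the default `on_messages_stream` described above can be illustrated with a self-contained sketch. The `EchoAgent` and the simplified `TextMessage`/`Response` stand-ins below are hypothetical; the real AgentChat classes carry more state and richer message types.

```python
import asyncio
from dataclasses import dataclass, field
from typing import AsyncGenerator, List


# Simplified stand-ins for AgentChat's message types; illustrative only.
@dataclass
class TextMessage:
    content: str
    source: str


@dataclass
class Response:
    chat_message: TextMessage
    inner_messages: List[TextMessage] = field(default_factory=list)


class EchoAgent:
    """Hypothetical agent that echoes the last message it receives."""

    def __init__(self, name: str) -> None:
        self.name = name

    async def on_messages(self, messages: List[TextMessage]) -> Response:
        content = messages[-1].content if messages else ""
        return Response(chat_message=TextMessage(content=content, source=self.name))

    async def on_messages_stream(
        self, messages: List[TextMessage]
    ) -> AsyncGenerator["TextMessage | Response", None]:
        # Default-style implementation: call on_messages, then yield any
        # inner messages followed by the final Response.
        response = await self.on_messages(messages)
        for inner in response.inner_messages:
            yield inner
        yield response


async def main() -> List[str]:
    agent = EchoAgent("echo")
    outputs: List[str] = []
    async for item in agent.on_messages_stream([TextMessage(content="hi", source="user")]):
        outputs.append(item.chat_message.content if isinstance(item, Response) else item.content)
    return outputs


print(asyncio.run(main()))  # ['hi']
```

An agent that overrides `on_messages_stream` directly, as the next example does, can instead yield messages as they are produced rather than all at once at the end.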
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### CountDownAgent\n",
"\n",
"In this example, we create a simple agent that counts down from a given number to zero,\n",
"and produces a stream of messages with the current count."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"3...\n",
"2...\n",
"1...\n",
"Done!\n"
]
}
],
"source": [
"from typing import AsyncGenerator, List, Sequence\n",
"\n",
"from autogen_core.base import CancellationToken\n",
"from autogen_agentchat.agents import BaseChatAgent\n",
"from autogen_agentchat.base import Response\n",
"from autogen_agentchat.messages import AgentMessage, ChatMessage, TextMessage\n",
"\n",
"\n",
"class CountDownAgent(BaseChatAgent):\n",
" def __init__(self, name: str, count: int = 3):\n",
" super().__init__(name, \"A simple agent that counts down.\")\n",
" self._count = count\n",
"\n",
" @property\n",
" def produced_message_types(self) -> List[type[ChatMessage]]:\n",
" return [TextMessage]\n",
"\n",
" async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
" # Calls the on_messages_stream.\n",
" response: Response | None = None\n",
" async for message in self.on_messages_stream(messages, cancellation_token):\n",
" if isinstance(message, Response):\n",
" response = message\n",
" assert response is not None\n",
" return response\n",
"\n",
" async def on_messages_stream(\n",
" self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken\n",
" ) -> AsyncGenerator[AgentMessage | Response, None]:\n",
" inner_messages: List[AgentMessage] = []\n",
" for i in range(self._count, 0, -1):\n",
" msg = TextMessage(content=f\"{i}...\", source=self.name)\n",
" inner_messages.append(msg)\n",
" yield msg\n",
" # The response is returned at the end of the stream.\n",
" # It contains the final message and all the inner messages.\n",
" yield Response(chat_message=TextMessage(content=\"Done!\", source=self.name), inner_messages=inner_messages)\n",
"\n",
" async def on_reset(self, cancellation_token: CancellationToken) -> None:\n",
" pass\n",
"\n",
"\n",
"async def run_countdown_agent() -> None:\n",
" # Create a countdown agent.\n",
" countdown_agent = CountDownAgent(\"countdown\")\n",
"\n",
" # Run the agent with a given task and stream the response.\n",
" async for message in countdown_agent.on_messages_stream([], CancellationToken()):\n",
" if isinstance(message, Response):\n",
" print(message.chat_message.content)\n",
" else:\n",
" print(message.content)\n",
"\n",
"\n",
"# Use asyncio.run(run_countdown_agent()) when running in a script.\n",
"await run_countdown_agent()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### UserProxyAgent \n",
"\n",
"A common use case for building a custom agent is to create an agent that acts as a proxy for the user.\n",
"\n",
"In the example below, we show how to implement a `UserProxyAgent` - an agent that asks the user to enter\n",
"some text through the console and then returns that message as a response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I am glad to be here.\n"
]
}
],
"source": [
"import asyncio\n",
"from typing import List, Sequence\n",
"\n",
"from autogen_core.base import CancellationToken\n",
"from autogen_agentchat.base import Response\n",
"from autogen_agentchat.agents import BaseChatAgent\n",
"from autogen_agentchat.messages import ChatMessage, TextMessage\n",
"\n",
"\n",
"class UserProxyAgent(BaseChatAgent):\n",
" def __init__(self, name: str) -> None:\n",
" super().__init__(name, \"A human user.\")\n",
"\n",
" @property\n",
" def produced_message_types(self) -> List[type[ChatMessage]]:\n",
" return [TextMessage]\n",
"\n",
" async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
" user_input = await asyncio.get_event_loop().run_in_executor(None, input, \"Enter your response: \")\n",
" return Response(chat_message=TextMessage(content=user_input, source=self.name))\n",
"\n",
" async def on_reset(self, cancellation_token: CancellationToken) -> None:\n",
" pass\n",
"\n",
"\n",
"async def run_user_proxy_agent() -> None:\n",
" user_proxy_agent = UserProxyAgent(name=\"user_proxy_agent\")\n",
" response = await user_proxy_agent.on_messages([], CancellationToken())\n",
" print(response.chat_message.content)\n",
"\n",
"\n",
"# Use asyncio.run(run_user_proxy_agent()) when running in a script.\n",
"await run_user_proxy_agent()"
]
}
],
"metadata": {
@@ -444,7 +226,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
"version": "3.11.5"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,166 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Custom Agents\n",
"\n",
"You may have agents with behaviors that do not fall into a preset. \n",
"In such cases, you can build custom agents.\n",
"\n",
"All agents in AgentChat inherit from the {py:class}`~autogen_agentchat.agents.BaseChatAgent` \n",
"class and implement the following abstract methods and attributes:\n",
"\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages`: The abstract method that defines the behavior of the agent in response to messages. This method is called when the agent is asked to provide a response in {py:meth}`~autogen_agentchat.agents.BaseChatAgent.run`. It returns a {py:class}`~autogen_agentchat.base.Response` object.\n",
"- {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_reset`: The abstract method that resets the agent to its initial state. This method is called when the agent is asked to reset itself.\n",
"- {py:attr}`~autogen_agentchat.agents.BaseChatAgent.produced_message_types`: The list of possible {py:class}`~autogen_agentchat.messages.ChatMessage` message types the agent can produce in its response.\n",
"\n",
"Optionally, you can implement the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream` method to stream messages as they are generated by the agent. If this method is not implemented, the agent\n",
"uses the default implementation of {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages_stream`\n",
"that calls the {py:meth}`~autogen_agentchat.agents.BaseChatAgent.on_messages` method and\n",
"yields all messages in the response."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## CountDownAgent\n",
"\n",
"In this example, we create a simple agent that counts down from a given number to zero,\n",
"and produces a stream of messages with the current count."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import AsyncGenerator, List, Sequence\n",
"\n",
"from autogen_agentchat.agents import BaseChatAgent\n",
"from autogen_agentchat.base import Response\n",
"from autogen_agentchat.messages import AgentMessage, ChatMessage, TextMessage\n",
"from autogen_core.base import CancellationToken\n",
"\n",
"\n",
"class CountDownAgent(BaseChatAgent):\n",
" def __init__(self, name: str, count: int = 3):\n",
" super().__init__(name, \"A simple agent that counts down.\")\n",
" self._count = count\n",
"\n",
" @property\n",
" def produced_message_types(self) -> List[type[ChatMessage]]:\n",
" return [TextMessage]\n",
"\n",
" async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
" # Calls the on_messages_stream.\n",
" response: Response | None = None\n",
" async for message in self.on_messages_stream(messages, cancellation_token):\n",
" if isinstance(message, Response):\n",
" response = message\n",
" assert response is not None\n",
" return response\n",
"\n",
" async def on_messages_stream(\n",
" self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken\n",
" ) -> AsyncGenerator[AgentMessage | Response, None]:\n",
" inner_messages: List[AgentMessage] = []\n",
" for i in range(self._count, 0, -1):\n",
" msg = TextMessage(content=f\"{i}...\", source=self.name)\n",
" inner_messages.append(msg)\n",
" yield msg\n",
" # The response is returned at the end of the stream.\n",
" # It contains the final message and all the inner messages.\n",
" yield Response(chat_message=TextMessage(content=\"Done!\", source=self.name), inner_messages=inner_messages)\n",
"\n",
" async def on_reset(self, cancellation_token: CancellationToken) -> None:\n",
" pass\n",
"\n",
"\n",
"async def run_countdown_agent() -> None:\n",
" # Create a countdown agent.\n",
" countdown_agent = CountDownAgent(\"countdown\")\n",
"\n",
" # Run the agent with a given task and stream the response.\n",
" async for message in countdown_agent.on_messages_stream([], CancellationToken()):\n",
" if isinstance(message, Response):\n",
" print(message.chat_message.content)\n",
" else:\n",
" print(message.content)\n",
"\n",
"\n",
"# Use asyncio.run(run_countdown_agent()) when running in a script.\n",
"await run_countdown_agent()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## UserProxyAgent \n",
"\n",
"A common use case for building a custom agent is to create an agent that acts as a proxy for the user.\n",
"\n",
"In the example below, we show how to implement a `UserProxyAgent` - an agent that asks the user to enter\n",
"some text through the console and then returns that message as a response."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import asyncio\n",
"from typing import List, Sequence\n",
"\n",
"from autogen_agentchat.agents import BaseChatAgent\n",
"from autogen_agentchat.base import Response\n",
"from autogen_agentchat.messages import ChatMessage, TextMessage\n",
"from autogen_core.base import CancellationToken\n",
"\n",
"\n",
"class UserProxyAgent(BaseChatAgent):\n",
" def __init__(self, name: str) -> None:\n",
" super().__init__(name, \"A human user.\")\n",
"\n",
" @property\n",
" def produced_message_types(self) -> List[type[ChatMessage]]:\n",
" return [TextMessage]\n",
"\n",
" async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:\n",
" user_input = await asyncio.get_event_loop().run_in_executor(None, input, \"Enter your response: \")\n",
" return Response(chat_message=TextMessage(content=user_input, source=self.name))\n",
"\n",
" async def on_reset(self, cancellation_token: CancellationToken) -> None:\n",
" pass\n",
"\n",
"\n",
"async def run_user_proxy_agent() -> None:\n",
" user_proxy_agent = UserProxyAgent(name=\"user_proxy_agent\")\n",
" response = await user_proxy_agent.on_messages([], CancellationToken())\n",
" print(response.chat_message.content)\n",
"\n",
"\n",
"# Use asyncio.run(run_user_proxy_agent()) when running in a script.\n",
"await run_user_proxy_agent()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -46,6 +46,12 @@ A smart team that uses a model-based strategy and custom selector.
A dynamic team that uses handoffs to pass tasks between agents.
:::
:::{grid-item-card} {fas}`users;pst-color-primary` Custom Agents
:link: ./custom-agents.html
How to build custom agents.
:::
::::
```{toctree}
@@ -58,4 +64,5 @@ teams
selector-group-chat
swarm
termination
custom-agents
```

View File

@@ -1,181 +1,191 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Models\n",
"\n",
"In many cases, agents need access to model services such as OpenAI, Azure OpenAI, and local models.\n",
"AgentChat utilizes model clients provided by the\n",
"[`autogen-ext`](../../core-user-guide/framework/model-clients.ipynb) package."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## OpenAI\n",
"\n",
"To access OpenAI models, you need to install the `openai` extension to use the {py:class}`~autogen_ext.models.OpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai]==0.4.0.dev6'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will also need to obtain an [API key](https://platform.openai.com/account/api-keys) from OpenAI."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"openai_model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY environment variable set.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To test the model client, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)\n"
]
}
],
"source": [
"from autogen_core.components.models import UserMessage\n",
"\n",
"result = await openai_model_client.create([UserMessage(content=\"What is the capital of France?\", source=\"user\")])\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure OpenAI\n",
"\n",
"Install the `azure` and `openai` extensions to use the {py:class}`~autogen_ext.models.AzureOpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai,azure]==0.4.0.dev6'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the client, you need to provide your deployment ID, Azure Cognitive Services endpoint, API version, and model capabilities.\n",
"For authentication, you can either provide an API key or an Azure Active Directory (AAD) token credential.\n",
"\n",
"The following code snippet shows how to use AAD authentication.\n",
"The identity used must be assigned the [Cognitive Services OpenAI User](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-user) role."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import AzureOpenAIChatCompletionClient\n",
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"\n",
"# Create the token provider\n",
"token_provider = get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\")\n",
"\n",
"az_model_client = AzureOpenAIChatCompletionClient(\n",
" model=\"{your-azure-deployment}\",\n",
" api_version=\"2024-06-01\",\n",
" azure_endpoint=\"https://{your-custom-endpoint}.openai.azure.com/\",\n",
" azure_ad_token_provider=token_provider, # Optional if you choose key-based authentication.\n",
" # api_key=\"sk-...\", # For key-based authentication.\n",
" model_capabilities={\n",
" \"vision\": True,\n",
" \"function_calling\": True,\n",
" \"json_output\": True,\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity#chat-completions) for more information on how to use the Azure client directly."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Models\n",
"\n",
"We are working on it. Stay tuned!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Models\n",
"\n",
"In many cases, agents need access to model services such as OpenAI, Azure OpenAI, and local models.\n",
"AgentChat utilizes model clients provided by the\n",
"[`autogen-ext`](../../core-user-guide/framework/model-clients.ipynb) package."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## OpenAI\n",
"\n",
"To access OpenAI models, you need to install the `openai` extension to use the {py:class}`~autogen_ext.models.OpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai]==0.4.0.dev6'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will also need to obtain an [API key](https://platform.openai.com/account/api-keys) from OpenAI."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"openai_model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY environment variable set.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To test the model client, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)\n"
]
}
],
"source": [
"from autogen_core.components.models import UserMessage\n",
"\n",
"result = await openai_model_client.create([UserMessage(content=\"What is the capital of France?\", source=\"user\")])\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{note}\n",
"You can use this client with models hosted on OpenAI-compatible endpoints, however, we have not tested this functionality.\n",
"See {py:class}`~autogen_ext.models.OpenAIChatCompletionClient` for more information.\n",
"```"
]
},
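As a concrete illustration of the note above, a client for an OpenAI-compatible endpoint might be configured as follows. This is an untested configuration sketch: the endpoint URL, model name, and API key are placeholders, and passing `base_url` together with explicit `model_capabilities` is an assumption based on the client wrapping the OpenAI SDK.

```python
from autogen_ext.models import OpenAIChatCompletionClient

# Hypothetical, untested configuration for an OpenAI-compatible server.
local_model_client = OpenAIChatCompletionClient(
    model="llama-3.1-8b-instruct",  # placeholder model name
    base_url="http://localhost:8000/v1",  # placeholder OpenAI-compatible endpoint
    api_key="not-needed-for-local",  # many local servers ignore the key
    model_capabilities={  # capabilities must be stated for non-OpenAI models
        "vision": False,
        "function_calling": True,
        "json_output": True,
    },
)
```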
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure OpenAI\n",
"\n",
"Install the `azure` and `openai` extensions to use the {py:class}`~autogen_ext.models.AzureOpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai,azure]==0.4.0.dev6'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the client, you need to provide your deployment ID, Azure Cognitive Services endpoint, API version, and model capabilities.\n",
"For authentication, you can either provide an API key or an Azure Active Directory (AAD) token credential.\n",
"\n",
"The following code snippet shows how to use AAD authentication.\n",
"The identity used must be assigned the [Cognitive Services OpenAI User](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-user) role."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import AzureOpenAIChatCompletionClient\n",
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"\n",
"# Create the token provider\n",
"token_provider = get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\")\n",
"\n",
"az_model_client = AzureOpenAIChatCompletionClient(\n",
" model=\"{your-azure-deployment}\",\n",
" api_version=\"2024-06-01\",\n",
" azure_endpoint=\"https://{your-custom-endpoint}.openai.azure.com/\",\n",
" azure_ad_token_provider=token_provider, # Optional if you choose key-based authentication.\n",
" # api_key=\"sk-...\", # For key-based authentication.\n",
" model_capabilities={\n",
" \"vision\": True,\n",
" \"function_calling\": True,\n",
" \"json_output\": True,\n",
" },\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity#chat-completions) for more information on how to use the Azure client directly."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Models\n",
"\n",
"We are working on it. Stay tuned!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -281,7 +281,7 @@
"outputs": [],
"source": [
"from autogen_agentchat.agents import AssistantAgent\n",
"from autogen_agentchat.task import MaxMessageTermination, TextMentionTermination, Console\n",
"from autogen_agentchat.task import Console, MaxMessageTermination, TextMentionTermination\n",
"from autogen_agentchat.teams import RoundRobinGroupChat\n",
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",

View File

@@ -1,7 +1,7 @@
import json
from abc import ABC, abstractmethod
from collections.abc import Sequence
from typing import Any, Dict, Generic, Mapping, Protocol, Type, TypedDict, TypeVar, runtime_checkable, cast
from typing import Any, Dict, Generic, Mapping, Protocol, Type, TypedDict, TypeVar, cast, runtime_checkable
import jsonref
from pydantic import BaseModel

View File

@@ -1,12 +1,12 @@
import inspect
from typing import Annotated, List
from autogen_core.components.tools._base import ToolSchema
import pytest
from autogen_core.base import CancellationToken
from autogen_core.components._function_utils import get_typed_signature
from autogen_core.components.models._openai_client import convert_tools
from autogen_core.components.tools import BaseTool, FunctionTool
from autogen_core.components.tools._base import ToolSchema
from pydantic import BaseModel, Field, model_serializer
from pydantic_core import PydanticUndefined