mirror of
https://github.com/microsoft/autogen.git
synced 2026-01-27 02:08:08 -05:00
Update notebook contrib guidance, update a few notebooks for site (#1651)
* update some notebooks
* Update contributing.md
* remove os

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
@@ -28,8 +28,9 @@
"- [Example 5: Solve comprehensive QA problems with RetrieveChat's unique feature `Update Context`](#example-5)\n",
"- [Example 6: Solve comprehensive QA problems with customized prompt and few-shot learning](#example-6)\n",
"\n",
"\\:\\:\\:info Requirements\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
"\n",
"```bash\n",
@@ -37,8 +38,8 @@
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
"\n",
"\\:\\:\\:\n"
":::\n",
"````"
]
},
{
@@ -78,19 +79,7 @@
"# a vector database instance\n",
"from autogen.retrieve_utils import TEXT_FORMATS\n",
"\n",
"config_list = autogen.config_list_from_json(\n",
"    env_or_file=\"OAI_CONFIG_LIST\",\n",
"    filter_dict={\n",
"        \"model\": {\n",
"            \"gpt-4\",\n",
"            \"gpt4\",\n",
"            \"gpt-4-32k\",\n",
"            \"gpt-4-32k-0314\",\n",
"            \"gpt-35-turbo\",\n",
"            \"gpt-3.5-turbo\",\n",
"        }\n",
"    },\n",
")\n",
"config_list = autogen.config_list_from_json(env_or_file=\"OAI_CONFIG_LIST\")\n",
"\n",
"assert len(config_list) > 0\n",
"print(\"models to use: \", [config_list[i][\"model\"] for i in range(len(config_list))])"
@@ -101,18 +90,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\\:\\:\\:tip\n",
"````{=mdx}\n",
":::tip\n",
"Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n",
":::\n",
"````\n",
"\n",
"Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration).\n",
"\n",
"\\:\\:\\:"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Construct agents for RetrieveChat\n",
"\n",
"We start by initializing the `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. The system message needs to be set to \"You are a helpful assistant.\" for RetrieveAssistantAgent. The detailed instructions are given in the user message. Later we will use the `RetrieveUserProxyAgent.generate_init_prompt` to combine the instructions and a retrieval augmented generation task for an initial prompt to be sent to the LLM assistant."
@@ -21,16 +21,16 @@
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to write code and execute the code. Here `AssistantAgent` is an LLM-based agent that can write Python code (in a Python coding block) for a user to execute for a given task. `UserProxyAgent` is an agent which serves as a proxy for the human user to execute the code written by `AssistantAgent`, or automatically execute the code. Depending on the setting of `human_input_mode` and `max_consecutive_auto_reply`, the `UserProxyAgent` either solicits feedback from the human user or returns auto-feedback based on the result of code execution (success or failure and corresponding outputs) to `AssistantAgent`. `AssistantAgent` will debug the code and suggest new code if the result contains error. The two agents keep communicating to each other until the task is done.\n",
"\n",
"\\:\\:\\:info Requirements\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
"Install `pyautogen`:\n",
"```bash\n",
"pip install pyautogen\n",
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
"\n",
"\\:\\:\\:"
":::\n",
"````"
]
},
{
@@ -24,16 +24,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\\:\\:\\:tip\n",
"\n",
"Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration).\n",
"\n",
"\\:\\:\\:"
"````{=mdx}\n",
":::tip\n",
"Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n",
":::\n",
"````"
]
},
{
@@ -53,25 +53,18 @@
"\n",
"In this notebook, we demonstrate how to use `AssistantAgent` and `UserProxyAgent` to make function calls with the new feature of OpenAI models (in model version 0613). A specified prompt and function configs must be passed to `AssistantAgent` to initialize the agent. The corresponding functions must be passed to `UserProxyAgent`, which will execute any function calls made by `AssistantAgent`. Besides this requirement of matching descriptions with functions, we recommend checking the system message in the `AssistantAgent` to ensure the instructions align with the function call descriptions.\n",
"\n",
"\\:\\:\\:info Requirements\n",
"\n",
"````{=mdx}\n",
":::info Requirements\n",
"Install `pyautogen`:\n",
"```bash\n",
"pip install pyautogen\n",
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
"\n",
"\\:\\:\\:\n"
":::\n",
"````\n"
]
},
{
@@ -5,37 +5,29 @@
"config_list = autogen.config_list_from_json(env_or_file=\"OAI_CONFIG_LIST\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "92fde41f",
"metadata": {},
"source": [
"\\:\\:\\:tip\n",
"\n",
"Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration).\n",
"\n",
"\\:\\:\\:"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "2b9526e7",
"metadata": {},
"source": [
"````{=mdx}\n",
":::tip\n",
"Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n",
":::\n",
"````\n",
"\n",
"## Making Async and Sync Function Calls\n",
"\n",
"In this example, we demonstrate function call execution with `AssistantAgent` and `UserProxyAgent`. With the default system prompt of `AssistantAgent`, we allow the LLM assistant to perform tasks with code, and the `UserProxyAgent` would extract code blocks from the LLM response and execute them. With the new \"function_call\" feature, we define functions and specify the description of the function in the OpenAI config for the `AssistantAgent`. Then we register the functions in `UserProxyAgent`."
"cell_type": "code",
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_groupchat.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Auto Generated Agent Chat: Group Chat\n",
"<!--\n",
"tags: [\"orchestration\", \"group chat\"]\n",
"description: |\n",
"  Explore the utilization of large language models in automated group chat scenarios, where agents perform tasks collectively, demonstrating how they can be configured, interact with each other, and retrieve specific information from external resources.\n",
"-->\n",
"\n",
"# Group Chat\n",
"\n",
"AutoGen offers conversable agents powered by LLM, tool or human, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"This notebook is modified based on https://github.com/microsoft/FLAML/blob/4ea686af5c3e8ff24d9076a7a626c8b28ab5b1d7/notebook/autogen_multiagent_roleplay_chat.ipynb\n",
"\n",
"## Requirements\n",
"\n",
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
"````{=mdx}\n",
":::info Requirements\n",
"Install `pyautogen`:\n",
"```bash\n",
"pip install pyautogen\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 105,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"# %pip install \"pyautogen>=0.2.3\""
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
":::\n",
"````"
]
},
{
@@ -56,24 +48,12 @@
"source": [
"import autogen\n",
"\n",
"config_list_gpt4 = autogen.config_list_from_json(\n",
"config_list = autogen.config_list_from_json(\n",
"    \"OAI_CONFIG_LIST\",\n",
"    filter_dict={\n",
"        \"model\": [\"gpt-4\", \"gpt-4-0314\", \"gpt4\", \"gpt-4-32k\", \"gpt-4-32k-0314\", \"gpt-4-32k-v0314\"],\n",
"    },\n",
")\n",
"# config_list_gpt35 = autogen.config_list_from_json(\n",
"#     \"OAI_CONFIG_LIST\",\n",
"#     filter_dict={\n",
"#         \"model\": {\n",
"#             \"gpt-3.5-turbo\",\n",
"#             \"gpt-3.5-turbo-16k\",\n",
"#             \"gpt-3.5-turbo-0301\",\n",
"#             \"chatgpt-35-turbo-0301\",\n",
"#             \"gpt-35-turbo-v0301\",\n",
"#         },\n",
"#     },\n",
"# )"
")"
]
},
{
@@ -81,40 +61,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well). Only the gpt-4 models are kept in the list based on the filter condition.\n",
"````{=mdx}\n",
":::tip\n",
"Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n",
":::\n",
"````\n",
"\n",
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
"    {\n",
"        'model': 'gpt-4',\n",
"        'api_key': '<your OpenAI API key here>',\n",
"    },\n",
"    {\n",
"        'model': 'gpt-4',\n",
"        'api_key': '<your Azure OpenAI API key here>',\n",
"        'base_url': '<your Azure OpenAI API base here>',\n",
"        'api_type': 'azure',\n",
"        'api_version': '2023-06-01-preview',\n",
"    },\n",
"    {\n",
"        'model': 'gpt-4-32k',\n",
"        'api_key': '<your Azure OpenAI API key here>',\n",
"        'base_url': '<your Azure OpenAI API base here>',\n",
"        'api_type': 'azure',\n",
"        'api_version': '2023-06-01-preview',\n",
"    },\n",
"]\n",
"```\n",
"\n",
"You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Construct Agents"
]
},
@@ -124,7 +76,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm_config = {\"config_list\": config_list_gpt4, \"cache_seed\": 42}\n",
"llm_config = {\"config_list\": config_list, \"cache_seed\": 42}\n",
"user_proxy = autogen.UserProxyAgent(\n",
"    name=\"User_proxy\",\n",
"    system_message=\"A human admin.\",\n",
@@ -5,7 +5,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://colab.research.google.com/github/microsoft/autogen/blob/main/notebook/agentchat_groupchat_RAG.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
"<!--\n",
"tags: [\"group chat\", \"orchestration\", \"RAG\"]\n",
"description: |\n",
"  Implement and manage a multi-agent chat system using AutoGen, where AI assistants retrieve information, generate code, and interact collaboratively to solve complex tasks, especially in areas not covered by their training data.\n",
"-->"
]
},
{
@@ -13,27 +17,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Auto Generated Agent Chat: Group Chat with Retrieval Augmented Generation\n",
"# Group Chat with Retrieval Augmented Generation\n",
"\n",
"AutoGen supports conversable agents powered by LLMs, tools, or humans, performing tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation.\n",
"Please find documentation about this feature [here](https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat).\n",
"\n",
"## Requirements\n",
"````{=mdx}\n",
":::info Requirements\n",
"Some extra dependencies are needed for this notebook, which can be installed via pip:\n",
"\n",
"AutoGen requires `Python>=3.8`. To run this notebook example, please install:\n",
"```bash\n",
"pip install \"pyautogen[retrievechat]>=0.2.3\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"%%capture --no-stderr\n",
"# %pip install \"pyautogen[retrievechat]>=0.2.3\""
"pip install pyautogen[retrievechat]\n",
"```\n",
"\n",
"For more information, please refer to the [installation guide](/docs/installation/).\n",
":::\n",
"````"
]
},
{
@@ -66,13 +65,7 @@
"from autogen import AssistantAgent\n",
"from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent\n",
"\n",
"config_list = autogen.config_list_from_json(\n",
"    \"OAI_CONFIG_LIST\",\n",
"    file_location=\".\",\n",
"    filter_dict={\n",
"        \"model\": [\"gpt-3.5-turbo\", \"gpt-35-turbo\", \"gpt-35-turbo-0613\", \"gpt-4\", \"gpt4\", \"gpt-4-32k\"],\n",
"    },\n",
")\n",
"config_list = autogen.config_list_from_json(\"OAI_CONFIG_LIST\")\n",
"\n",
"print(\"LLM models: \", [config_list[i][\"model\"] for i in range(len(config_list))])"
]
@@ -82,33 +75,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n",
"````{=mdx}\n",
":::tip\n",
"Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n",
":::\n",
"````\n",
"\n",
"The config list looks like the following:\n",
"```python\n",
"config_list = [\n",
"    {\n",
"        \"model\": \"gpt-4\",\n",
"        \"api_key\": \"<your OpenAI API key>\",\n",
"    }, # OpenAI API endpoint for gpt-4\n",
"    {\n",
"        \"model\": \"gpt-35-turbo-0631\", # 0631 or newer is needed to use functions\n",
"        \"base_url\": \"<your Azure OpenAI API base>\",\n",
"        \"api_type\": \"azure\",\n",
"        \"api_version\": \"2023-08-01-preview\", # 2023-07-01-preview or newer is needed to use functions\n",
"        \"api_key\": \"<your Azure OpenAI API key>\"\n",
"    }\n",
"]\n",
"```\n",
"\n",
"You can set the value of config_list in any way you prefer. Please refer to this [notebook](https://github.com/microsoft/autogen/blob/main/website/docs/llm_configuration.ipynb) for full code examples of the different methods."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Construct Agents"
]
},
@@ -819,7 +791,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.11.7"
}
},
"nbformat": 4,
@@ -5,6 +5,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<!--\n",
"tags: [\"orchestration\"]\n",
"description: |\n",
"  Explore the demonstration of the SocietyOfMindAgent in the AutoGen library, which runs a group chat as an internal monologue, but appears to the external world as a single agent, offering a structured way to manage complex interactions among multiple agents and handle issues such as extracting responses from complex dialogues and dealing with context window constraints.\n",
"-->\n",
"\n",
"# SocietyOfMindAgent\n",
"\n",
"This notebook demonstrates the SocietyOfMindAgent, which runs a group chat as an internal monologue, but appears to the external world as a single agent. This confers three distinct advantages:\n",
@@ -12,47 +18,17 @@
"1. It provides a clean way of producing a hierarchy of agents, hiding complexity as inner monologues.\n",
"2. It provides a consistent way of extracting an answer from a lengthy group chat (normally, it is not clear which message is the final response, and the response itself may not always be formatted in a way that makes sense when extracted as a standalone message).\n",
"3. It provides a way of recovering when agents exceed their context window constraints (the inner monologue is protected by try-catch blocks)\n",
" \n",
"\n",
"## Requirements\n",
"\n",
"AutoGen requires `Python>=3.8`. To run this notebook example, please install the latest version of AutoGen:\n",
"```sh\n",
"````{=mdx}\n",
":::info Requirements\n",
"Install `pyautogen`:\n",
"```bash\n",
"pip install pyautogen\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# %pip install --quiet pyautogen"
]
},
{
"attachments": {},
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set your API Endpoint\n",
"\n",
"The [`config_list_from_json`](https://microsoft.github.io/autogen/docs/reference/oai/openai_utils#config_list_from_json) function loads a list of configurations from an environment variable or a json file.\n",
"\n",
"It first looks for environment variable \"OAI_CONFIG_LIST\" which needs to be a valid json string. If that variable is not found, it then looks for a json file named \"OAI_CONFIG_LIST\". It filters the configs by models (you can filter by other keys as well).\n",
"\n",
"Your json config should look something like the following:\n",
"```json\n",
"[\n",
"    {\n",
"        \"model\": \"gpt-4\",\n",
"        \"api_key\": \"<your OpenAI API key here>\"\n",
"    }\n",
"]\n",
"```\n",
"\n",
"If you open this notebook in colab, you can upload your files by clicking the file icon on the left panel and then choose \"upload file\" icon.\n"
"For more information, please refer to the [installation guide](/docs/installation/).\n",
":::\n",
"````"
]
},
{
@@ -79,6 +55,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"````{=mdx}\n",
":::tip\n",
"Learn more about configuring LLMs for agents [here](/docs/llm_configuration).\n",
":::\n",
"````\n",
"\n",
"### Example Group Chat with Two Agents\n",
"\n",
"In this example, we will use an AssistantAgent and a UserProxy agent (configured for code execution) to work together to solve a problem. Executing code requires *at least* two conversation turns (one to write the code, and one to execute the code). If the code fails, or needs further refinement, then additional turns may also be needed. We will then wrap these agents in a SocietyOfMindAgent, hiding the internal discussion from other agents (though it will still appear in the console), and ensuring that the response is suitable as a standalone message."
@@ -26,22 +26,40 @@ The following points are best practices for authoring notebooks to ensure consistency

You don't need to explain in depth how to install AutoGen. Unless there are specific instructions for the notebook, just use the following markdown snippet:

````
\:\:\:info Requirements

``````
````{=mdx}
:::info Requirements
Install `pyautogen`:
```bash
pip install pyautogen
```

For more information, please refer to the [installation guide](/docs/installation/).

\:\:\:
:::
````
``````

Or if extras are needed:

``````
````{=mdx}
:::info Requirements
Some extra dependencies are needed for this notebook, which can be installed via pip:

```bash
pip install pyautogen[retrievechat] flaml[automl]
```

For more information, please refer to the [installation guide](/docs/installation/).
:::
````
``````

When specifying the config list, to ensure consistency it is best to use approximately the following code:

```python
import autogen

config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
)
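The `filter_dict` narrowing that several of the notebooks above rely on can be illustrated with a small self-contained sketch. The configs and keys below are placeholders, and `filter_configs` is a hypothetical helper approximating how `autogen.config_list_from_json` applies its `filter_dict` argument, not the library's actual implementation:

```python
# Placeholder configs; real entries would carry actual API keys.
configs = [
    {"model": "gpt-4", "api_key": "<openai-key>"},
    {"model": "gpt-3.5-turbo", "api_key": "<openai-key>"},
    {"model": "gpt-4-32k", "api_key": "<azure-key>", "api_type": "azure"},
]

def filter_configs(configs, filter_dict):
    """Keep configs whose value for every filter key is in that key's allowed set."""
    return [
        c for c in configs
        if all(c.get(key) in allowed for key, allowed in filter_dict.items())
    ]

gpt4_configs = filter_configs(configs, {"model": ["gpt-4", "gpt-4-32k"]})
print([c["model"] for c in gpt4_configs])  # → ['gpt-4', 'gpt-4-32k']
```

This mirrors the notebooks' pattern of keeping only the gpt-4 family when building `config_list`.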
@@ -49,10 +67,10 @@ config_list = autogen.config_list_from_json(
Then after the code cell where this is used, include the following markdown snippet:

```
\:\:\:tip

Learn more about the various ways to configure LLM endpoints [here](/docs/llm_configuration).

\:\:\:
```
``````
````{=mdx}
:::tip
Learn more about configuring LLMs for agents [here](/docs/llm_configuration).
:::
````
``````
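The snippets above assume an `OAI_CONFIG_LIST` file (or environment variable) holding a JSON array of model configurations. A minimal placeholder version, consistent with the examples earlier in this diff, might look like:

```json
[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key here>"
    }
]
```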