Mirror of https://github.com/microsoft/autogen.git, synced 2026-04-20 03:02:16 -04:00

Update Docs; Update examples to allow Azure OpenAI setup (#154)

* Update Docs; Update examples to allow Azure OpenAI setup
* update
@@ -8,6 +8,8 @@ This directory contains examples and demos of how to use AGNext.
- `patterns`: Contains examples that illustrate how multi-agent patterns can be implemented in AGNext.
- `demos`: Contains interactive demos that showcase applications that can be built using AGNext.

See [Running the examples](#running-the-examples) for instructions on how to run the examples.

## Core examples

We provide examples to illustrate the core concepts of AGNext: agents, runtime, and message passing.
@@ -28,13 +30,11 @@ We provide examples to illustrate how to use tools in AGNext:
We provide examples to illustrate how multi-agent patterns can be implemented in AGNext:

- [`coder_executor_pub_sub.py`](patterns/coder_executor_pub_sub.py): An example of how to create a coder-executor reflection pattern using broadcast communication. This example creates a plot of stock prices using the Yahoo Finance API.
- [`coder_reviewer_direct.py`](patterns/coder_reviewer_direct.py): An example of how to create a coder-reviewer reflection pattern using direct communication.
- [`coder_reviewer_pub_sub.py`](patterns/coder_reviewer_pub_sub.py): An example of how to create a coder-reviewer reflection pattern using broadcast communication.
- [`group_chat_pub_sub.py`](patterns/group_chat_pub_sub.py): An example of how to create a round-robin group chat among three agents using broadcast communication.
- [`mixture_of_agents_direct.py`](patterns/mixture_of_agents_direct.py): An example of how to create a [mixture of agents](https://github.com/togethercomputer/moa) using direct communication.
- [`mixture_of_agents_pub_sub.py`](patterns/mixture_of_agents_pub_sub.py): An example of how to create a [mixture of agents](https://github.com/togethercomputer/moa) using broadcast communication.
- [`multi_agent_debate_pub_sub.py`](patterns/multi_agent_debate_pub_sub.py): An example of how to create a [sparse multi-agent debate](https://arxiv.org/abs/2406.11776) pattern using broadcast communication.
- [`coder_executor.py`](patterns/coder_executor.py): An example of how to create a coder-executor reflection pattern. This example creates a plot of stock prices using the Yahoo Finance API.
- [`coder_reviewer.py`](patterns/coder_reviewer.py): An example of how to create a coder-reviewer reflection pattern.
- [`group_chat.py`](patterns/group_chat.py): An example of how to create a round-robin group chat among three agents.
- [`mixture_of_agents.py`](patterns/mixture_of_agents.py): An example of how to create a [mixture of agents](https://github.com/togethercomputer/moa).
- [`multi_agent_debate.py`](patterns/multi_agent_debate.py): An example of how to create a [sparse multi-agent debate](https://arxiv.org/abs/2406.11776) pattern.

## Demos
@@ -50,14 +50,40 @@ We provide interactive demos that showcase applications that can be built using
the group chat pattern.
- [`chess_game.py`](demos/chess_game.py): An example with two chess player agents that execute their own tools to demonstrate tool use and reflection on tool use.

## Running the examples and demos
## Running the examples

First, you need a shell with AGNext and the examples dependencies installed. To do this, run:
### Prerequisites

First, you need a shell with AGNext and the examples dependencies installed.
To do this, in the example directory, run:

```bash
hatch shell
```

Then, you need to set the `OPENAI_API_KEY` environment variable to your OpenAI API key.

```bash
export OPENAI_API_KEY=your_openai_api_key
```

For Azure OpenAI API, you need to set the following environment variables:

```bash
export AZURE_OPENAI_API_KEY=your_azure_openai_api_key
export AZURE_OPENAI_API_ENDPOINT=your_azure_openai_endpoint
```

By default, OpenAI API is used.
To use Azure OpenAI API, set the `OPENAI_API_TYPE` environment variable to `azure`.

```bash
export OPENAI_API_TYPE=azure
```

### Running

To run an example, just run the corresponding Python script. For example:

```bash
@@ -1,10 +1,14 @@
from typing import List, Optional, Union
import os
from typing import Any, List, Optional, Union

from agnext.components.models import (
    AssistantMessage,
    AzureOpenAIChatCompletionClient,
    ChatCompletionClient,
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    LLMMessage,
    OpenAIChatCompletionClient,
    UserMessage,
)
from typing_extensions import Literal
@@ -96,3 +100,28 @@ def convert_messages_to_llm_messages(
        raise AssertionError("unreachable")

    return result


def get_chat_completion_client_from_envs(**kwargs: Any) -> ChatCompletionClient:
    # Check API type.
    api_type = os.getenv("OPENAI_API_TYPE", "openai")
    if api_type == "openai":
        # Check API key.
        api_key = os.getenv("OPENAI_API_KEY")
        if api_key is None:
            raise ValueError("OPENAI_API_KEY is not set")
        kwargs["api_key"] = api_key
        return OpenAIChatCompletionClient(**kwargs)
    elif api_type == "azure":
        # Check Azure API key.
        azure_api_key = os.getenv("AZURE_OPENAI_API_KEY")
        if azure_api_key is None:
            raise ValueError("AZURE_OPENAI_API_KEY is not set")
        kwargs["api_key"] = azure_api_key
        # Check Azure API endpoint.
        azure_api_endpoint = os.getenv("AZURE_OPENAI_API_ENDPOINT")
        if azure_api_endpoint is None:
            raise ValueError("AZURE_OPENAI_API_ENDPOINT is not set")
        kwargs["azure_endpoint"] = azure_api_endpoint
        return AzureOpenAIChatCompletionClient(**kwargs)  # type: ignore
    raise ValueError(f"Unknown API type: {api_type}")
@@ -6,18 +6,23 @@ chat completion model, and returns the response to the main function.
"""

import asyncio
import os
import sys
from dataclasses import dataclass

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.models import (
    ChatCompletionClient,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class Message:
@@ -41,7 +46,8 @@ class ChatCompletionAgent(TypeRoutedAgent):
async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    agent = runtime.register_and_get(
        "chat_agent", lambda: ChatCompletionAgent("Chat agent", OpenAIChatCompletionClient(model="gpt-3.5-turbo"))
        "chat_agent",
        lambda: ChatCompletionAgent("Chat agent", get_chat_completion_client_from_envs(model="gpt-3.5-turbo")),
    )

    # Send a message to the agent.
@@ -11,6 +11,8 @@ and publishes the response.
"""

import asyncio
import os
import sys
from dataclasses import dataclass
from typing import List
@@ -20,12 +22,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class Message:
@@ -76,7 +81,7 @@ async def main() -> None:
        "Jack",
        lambda: ChatCompletionAgent(
            description="Jack, a comedian",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            system_messages=[
                SystemMessage("You are a comedian who likes to make jokes. When you are done talking, say 'TERMINATE'.")
            ],
@@ -87,7 +92,7 @@ async def main() -> None:
        "Cathy",
        lambda: ChatCompletionAgent(
            description="Cathy, a poet",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            system_messages=[
                SystemMessage("You are a poet who likes to write poems. When you are done talking, say 'TERMINATE'.")
            ],
@@ -8,7 +8,7 @@ import sys
from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.memory import ChatMemory
from agnext.components.models import ChatCompletionClient, OpenAIChatCompletionClient, SystemMessage
from agnext.components.models import ChatCompletionClient, SystemMessage
from agnext.core import AgentRuntime, CancellationToken

sys.path.append(os.path.abspath(os.path.dirname(__file__)))
@@ -16,7 +16,7 @@ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from common.memory import BufferedChatMemory
from common.types import Message, TextMessage
from common.utils import convert_messages_to_llm_messages
from common.utils import convert_messages_to_llm_messages, get_chat_completion_client_from_envs
from utils import TextualChatApp, TextualUserAgent, start_runtime
@@ -102,7 +102,7 @@ def chat_room(runtime: AgentRuntime, app: TextualChatApp) -> None:
            description="Alice in the chat room.",
            background_story="Alice is a software engineer who loves to code.",
            memory=BufferedChatMemory(buffer_size=10),
            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    bob = runtime.register_and_get_proxy(
@@ -112,7 +112,7 @@ def chat_room(runtime: AgentRuntime, app: TextualChatApp) -> None:
            description="Bob in the chat room.",
            background_story="Bob is a data scientist who loves to analyze data.",
            memory=BufferedChatMemory(buffer_size=10),
            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    charlie = runtime.register_and_get_proxy(
@@ -122,7 +122,7 @@ def chat_room(runtime: AgentRuntime, app: TextualChatApp) -> None:
            description="Charlie in the chat room.",
            background_story="Charlie is a designer who loves to create art.",
            memory=BufferedChatMemory(buffer_size=10),
            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    app.welcoming_notice = f"""Welcome to the chat room demo with the following participants:
@@ -10,7 +10,7 @@ import sys
from typing import Annotated, Literal

from agnext.application import SingleThreadedAgentRuntime
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
from agnext.components.models import SystemMessage
from agnext.components.tools import FunctionTool
from agnext.core import AgentRuntime
from chess import BLACK, SQUARE_NAMES, WHITE, Board, Move
@@ -22,6 +22,7 @@ from common.agents._chat_completion_agent import ChatCompletionAgent
from common.memory import BufferedChatMemory
from common.patterns._group_chat_manager import GroupChatManager
from common.types import TextMessage
from common.utils import get_chat_completion_client_from_envs


def validate_turn(board: Board, player: Literal["white", "black"]) -> None:
@@ -168,7 +169,7 @@ def chess_game(runtime: AgentRuntime) -> None: # type: ignore
            ),
        ],
        memory=BufferedChatMemory(buffer_size=10),
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        tools=black_tools,
        ),
    )
@@ -185,7 +186,7 @@ def chess_game(runtime: AgentRuntime) -> None: # type: ignore
            ),
        ],
        memory=BufferedChatMemory(buffer_size=10),
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        tools=white_tools,
        ),
    )
@@ -6,7 +6,7 @@ import sys
import openai
from agnext.application import SingleThreadedAgentRuntime
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
from agnext.components.models import SystemMessage
from agnext.core import AgentRuntime

sys.path.append(os.path.abspath(os.path.dirname(__file__)))
@@ -15,6 +15,7 @@ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from common.agents import ChatCompletionAgent, ImageGenerationAgent
from common.memory import BufferedChatMemory
from common.patterns._group_chat_manager import GroupChatManager
from common.utils import get_chat_completion_client_from_envs
from utils import TextualChatApp, TextualUserAgent, start_runtime
@@ -42,7 +43,7 @@ def illustrator_critics(runtime: AgentRuntime, app: TextualChatApp) -> None:
            ),
        ],
        memory=BufferedChatMemory(buffer_size=10),
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo", max_tokens=500),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo", max_tokens=500),
        ),
    )
    illustrator = runtime.register_and_get_proxy(
@@ -70,7 +71,7 @@ def illustrator_critics(runtime: AgentRuntime, app: TextualChatApp) -> None:
            ),
        ],
        memory=BufferedChatMemory(buffer_size=2),
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        ),
    )
    runtime.register(
@@ -17,7 +17,7 @@ import aiofiles
import aiohttp
import openai
from agnext.application import SingleThreadedAgentRuntime
from agnext.components.models import OpenAIChatCompletionClient, SystemMessage
from agnext.components.models import SystemMessage
from agnext.components.tools import FunctionTool
from agnext.core import AgentRuntime
from markdownify import markdownify  # type: ignore
@@ -30,6 +30,7 @@ sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
from common.agents import ChatCompletionAgent
from common.memory import HeadAndTailChatMemory
from common.patterns._group_chat_manager import GroupChatManager
from common.utils import get_chat_completion_client_from_envs
from utils import TextualChatApp, TextualUserAgent, start_runtime
@@ -127,7 +128,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None: #
                "Be concise and deliver now."
            )
        ],
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -167,7 +168,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None: #
                "Be VERY concise."
            )
        ],
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -195,7 +196,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None: #
                "Be concise and deliver now."
            )
        ],
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -227,7 +228,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None: #
                "Be concise and deliver now."
            )
        ],
        model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
        memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
        tools=[
            FunctionTool(
@@ -244,7 +245,7 @@ def software_consultancy(runtime: AgentRuntime, app: TextualChatApp) -> None: #
        lambda: GroupChatManager(
            description="A group chat manager.",
            memory=HeadAndTailChatMemory(head_size=1, tail_size=10),
            model_client=OpenAIChatCompletionClient(model="gpt-4-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo"),
            participants=[developer, product_manager, ux_designer, illustrator, user_agent],
        ),
    )
@@ -13,7 +13,9 @@ otherwise, it generates a new code block and publishes a code execution task mes
"""

import asyncio
import os
import re
import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List
@@ -25,12 +27,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class TaskMessage:
@@ -175,7 +180,7 @@ async def main(task: str, temp_dir: str) -> None:
    runtime = SingleThreadedAgentRuntime()

    # Register the agents.
    runtime.register("coder", lambda: Coder(model_client=OpenAIChatCompletionClient(model="gpt-4-turbo")))
    runtime.register("coder", lambda: Coder(model_client=get_chat_completion_client_from_envs(model="gpt-4-turbo")))
    runtime.register("executor", lambda: Executor(executor=LocalCommandLineCodeExecutor(work_dir=temp_dir)))

    # Publish the task message.
@@ -13,7 +13,9 @@ a new code block and publishes a code review task message.
import asyncio
import json
import os
import re
import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List, Union
@@ -24,12 +26,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class CodeWritingTask:
@@ -250,14 +255,14 @@ async def main() -> None:
        "ReviewerAgent",
        lambda: ReviewerAgent(
            description="Code Reviewer",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    runtime.register(
        "CoderAgent",
        lambda: CoderAgent(
            description="Coder",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    await runtime.publish_message(
@@ -1,249 +0,0 @@
"""
This example shows how to use direct messaging to implement
a simple interaction between a coder and a reviewer agent.
1. The coder agent receives a code writing task message, generates a code block,
and sends a code review task message to the reviewer agent.
2. The reviewer agent receives the code review task message, reviews the code block,
and sends a code review result message to the coder agent.
3. The coder agent receives the code review result message, depending on the result:
if the code is approved, it sends a code writing result message; otherwise, it generates
a new code block and sends a code review task message.
4. The process continues until the coder agent receives an approved code review result message.
5. The main function prints the code writing result.
"""

import asyncio
import json
import re
from dataclasses import dataclass
from typing import List, Union

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import AgentId, CancellationToken


@dataclass
class CodeWritingTask:
    task: str


@dataclass
class CodeWritingResult:
    task: str
    code: str
    review: str


@dataclass
class CodeReviewTask:
    code_writing_task: str
    code_writing_scratchpad: str
    code: str


@dataclass
class CodeReviewResult:
    review: str
    approved: bool


class ReviewerAgent(TypeRoutedAgent):
    """An agent that performs code review tasks."""

    def __init__(
        self,
        description: str,
        model_client: ChatCompletionClient,
    ) -> None:
        super().__init__(description)
        self._system_messages = [
            SystemMessage(
                content="""You are a code reviewer. You focus on correctness, efficiency and safety of the code.
Respond using the following JSON format:
{
    "correctness": "<Your comments>",
    "efficiency": "<Your comments>",
    "safety": "<Your comments>",
    "approval": "<APPROVE or REVISE>",
    "suggested_changes": "<Your comments>"
}
""",
            )
        ]
        self._model_client = model_client

    @message_handler
    async def handle_code_review_task(
        self, message: CodeReviewTask, cancellation_token: CancellationToken
    ) -> CodeReviewResult:
        # Format the prompt for the code review.
        prompt = f"""The problem statement is: {message.code_writing_task}
The code is:
```
{message.code}
```
Please review the code and provide feedback.
"""
        # Generate a response using the chat completion API.
        response = await self._model_client.create(
            self._system_messages + [UserMessage(content=prompt, source=self.metadata["name"])]
        )
        assert isinstance(response.content, str)
        # TODO: use structured generation library e.g. guidance to ensure the response is in the expected format.
        # Parse the response JSON.
        review = json.loads(response.content)
        # Construct the review text.
        review_text = "Code review:\n" + "\n".join([f"{k}: {v}" for k, v in review.items()])
        approved = review["approval"].lower().strip() == "approve"
        # Return the review result.
        return CodeReviewResult(
            review=review_text,
            approved=approved,
        )


class CoderAgent(TypeRoutedAgent):
    """An agent that performs code writing tasks."""

    def __init__(
        self,
        description: str,
        model_client: ChatCompletionClient,
        reviewer: AgentId,
    ) -> None:
        super().__init__(
            description,
        )
        self._system_messages = [
            SystemMessage(
                content="""You are a proficient coder. You write code to solve problems.
Work with the reviewer to improve your code.
Always put all finished code in a single Markdown code block.
For example:
```python
def hello_world():
    print("Hello, World!")
```

Respond using the following format:

Thoughts: <Your comments>
Code: <Your code>
""",
            )
        ]
        self._model_client = model_client
        self._reviewer = reviewer

    @message_handler
    async def handle_code_writing_task(
        self,
        message: CodeWritingTask,
        cancellation_token: CancellationToken,
    ) -> CodeWritingResult:
        # Store the messages in a temporary memory for this request only.
        memory: List[CodeWritingTask | CodeReviewTask | CodeReviewResult] = []
        memory.append(message)
        # Keep generating responses until the code is approved.
        while not (isinstance(memory[-1], CodeReviewResult) and memory[-1].approved):
            # Create a list of LLM messages to send to the model.
            messages: List[LLMMessage] = [*self._system_messages]
            for m in memory:
                if isinstance(m, CodeReviewResult):
                    messages.append(UserMessage(content=m.review, source="Reviewer"))
                elif isinstance(m, CodeReviewTask):
                    messages.append(AssistantMessage(content=m.code_writing_scratchpad, source="Coder"))
                elif isinstance(m, CodeWritingTask):
                    messages.append(UserMessage(content=m.task, source="User"))
                else:
                    raise ValueError(f"Unexpected message type: {m}")
            # Generate a revision using the chat completion API.
            response = await self._model_client.create(messages)
            assert isinstance(response.content, str)
            # Extract the code block from the response.
            code_block = self._extract_code_block(response.content)
            if code_block is None:
                raise ValueError("Code block not found.")
            # Create a code review task.
            code_review_task = CodeReviewTask(
                code_writing_task=message.task,
                code_writing_scratchpad=response.content,
                code=code_block,
            )
            # Store the code review task in the session memory.
            memory.append(code_review_task)
            # Send the code review task to the reviewer.
            result = await self.send_message(code_review_task, self._reviewer)
            # Store the review result in the session memory.
            memory.append(await result)
        # Obtain the request from previous messages.
        review_request = next(m for m in reversed(memory) if isinstance(m, CodeReviewTask))
        assert review_request is not None
        # Publish the code writing result.
        return CodeWritingResult(
            task=message.task,
            code=review_request.code,
            review=memory[-1].review,
        )

    def _extract_code_block(self, markdown_text: str) -> Union[str, None]:
        pattern = r"```(\w+)\n(.*?)\n```"
        # Search for the pattern in the markdown text
        match = re.search(pattern, markdown_text, re.DOTALL)
        # Extract the language and code block if a match is found
        if match:
            return match.group(2)
        return None


async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    reviewer = runtime.register_and_get(
        "ReviewerAgent",
        lambda: ReviewerAgent(
            description="Code Reviewer",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
        ),
    )
    coder = runtime.register_and_get(
        "CoderAgent",
        lambda: CoderAgent(
            description="Coder",
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            reviewer=reviewer,
        ),
    )
    result = await runtime.send_message(
        message=CodeWritingTask(
            task="Write a function to find the directory with the largest number of files using multi-processing."
        ),
        recipient=coder,
    )
    while not result.done():
        await runtime.process_next()
    code_writing_result = result.result()
    assert isinstance(code_writing_result, CodeWritingResult)
    print("Code Writing Result:")
    print("-" * 80)
    print(f"Task:\n{code_writing_result.task}")
    print("-" * 80)
    print(f"Code:\n{code_writing_result.code}")
    print("-" * 80)
    print(f"Review:\n{code_writing_result.review}")


if __name__ == "__main__":
    import logging

    logging.basicConfig(level=logging.WARNING)
    logging.getLogger("agnext").setLevel(logging.DEBUG)
    asyncio.run(main())
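As a quick sanity check of the `_extract_code_block` helper in the deleted file above, the same regex can be exercised standalone; a minimal sketch:

```python
import re

def extract_code_block(markdown_text: str) -> str | None:
    # Same pattern as CoderAgent._extract_code_block: group(1) captures the
    # language tag, group(2) the code block body.
    match = re.search(r"```(\w+)\n(.*?)\n```", markdown_text, re.DOTALL)
    return match.group(2) if match else None

sample = "Thoughts: done\nCode:\n```python\nprint('hi')\n```"
assert extract_code_block(sample) == "print('hi')"
```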
@@ -12,6 +12,8 @@ to the last message in the memory and publishes the response.
"""

import asyncio
import os
import sys
from dataclasses import dataclass
from typing import List
@@ -21,12 +23,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import AgentId, CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class Message:
@@ -113,7 +118,7 @@ async def main() -> None:
        lambda: GroupChatParticipant(
            description="A data scientist",
            system_messages=[SystemMessage("You are a data scientist.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    agent2 = runtime.register_and_get(
@@ -121,7 +126,7 @@ async def main() -> None:
        lambda: GroupChatParticipant(
            description="An engineer",
            system_messages=[SystemMessage("You are an engineer.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
    agent3 = runtime.register_and_get(
@@ -129,7 +134,7 @@ async def main() -> None:
        lambda: GroupChatParticipant(
            description="An artist",
            system_messages=[SystemMessage("You are an artist.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        ),
    )
@@ -8,15 +8,21 @@ The reference agents handle each task independently and return the results to th
"""

import asyncio
import os
import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.models import ChatCompletionClient, OpenAIChatCompletionClient, SystemMessage, UserMessage
from agnext.components.models import ChatCompletionClient, SystemMessage, UserMessage
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class ReferenceAgentTask:
@@ -111,7 +117,7 @@ async def main() -> None:
        lambda: ReferenceAgent(
            description="Reference Agent 1",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.1),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo", temperature=0.1),
        ),
    )
    runtime.register(
@@ -119,7 +125,7 @@ async def main() -> None:
        lambda: ReferenceAgent(
            description="Reference Agent 2",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.5),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo", temperature=0.5),
        ),
    )
    runtime.register(
@@ -127,7 +133,7 @@ async def main() -> None:
        lambda: ReferenceAgent(
            description="Reference Agent 3",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=1.0),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo", temperature=1.0),
        ),
    )
    runtime.register(
@@ -139,7 +145,7 @@ async def main() -> None:
                "...synthesize these responses into a single, high-quality response... Responses from models:"
            )
        ],
        model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
        model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
        num_references=3,
        ),
    )
@@ -1,146 +0,0 @@
"""
This example demonstrates the mixture of agents implemented using direct
messaging and async gathering of results.
Mixture of agents: https://github.com/togethercomputer/moa

The example consists of two types of agents: reference agents and an aggregator agent.
The aggregator agent distributes tasks to reference agents and aggregates the results.
The reference agents handle each task independently and return the results to the aggregator agent.
"""

import asyncio
from dataclasses import dataclass
from typing import List

from agnext.application import SingleThreadedAgentRuntime
from agnext.components import TypeRoutedAgent, message_handler
from agnext.components.models import ChatCompletionClient, OpenAIChatCompletionClient, SystemMessage, UserMessage
from agnext.core import AgentId, CancellationToken


@dataclass
class ReferenceAgentTask:
    task: str


@dataclass
class ReferenceAgentTaskResult:
    result: str


@dataclass
class AggregatorTask:
    task: str


@dataclass
class AggregatorTaskResult:
    result: str


class ReferenceAgent(TypeRoutedAgent):
    """The reference agent that handles each task independently."""

    def __init__(
        self,
        description: str,
        system_messages: List[SystemMessage],
        model_client: ChatCompletionClient,
    ) -> None:
        super().__init__(description)
        self._system_messages = system_messages
        self._model_client = model_client

    @message_handler
    async def handle_task(
        self, message: ReferenceAgentTask, cancellation_token: CancellationToken
    ) -> ReferenceAgentTaskResult:
        """Handle a task message. This method sends the task to the model and responds with the result."""
        task_message = UserMessage(content=message.task, source=self.metadata["name"])
        response = await self._model_client.create(self._system_messages + [task_message])
        assert isinstance(response.content, str)
        return ReferenceAgentTaskResult(result=response.content)


class AggregatorAgent(TypeRoutedAgent):
    """The aggregator agent that distributes tasks to reference agents and aggregates the results."""

    def __init__(
        self,
        description: str,
        system_messages: List[SystemMessage],
        model_client: ChatCompletionClient,
        references: List[AgentId],
    ) -> None:
        super().__init__(description)
        self._system_messages = system_messages
        self._model_client = model_client
        self._references = references

    @message_handler
    async def handle_task(self, message: AggregatorTask, cancellation_token: CancellationToken) -> AggregatorTaskResult:
        """Handle a task message. This method sends the task to the reference agents
        and aggregates the results."""
        ref_task = ReferenceAgentTask(task=message.task)
        results: List[ReferenceAgentTaskResult] = await asyncio.gather(
            *[await self.send_message(ref_task, ref) for ref in self._references]
        )
        combined_result = "\n\n".join([r.result for r in results])
        response = await self._model_client.create(
            self._system_messages + [UserMessage(content=combined_result, source=self.metadata["name"])]
        )
        assert isinstance(response.content, str)
        return AggregatorTaskResult(result=response.content)


async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    ref1 = runtime.register_and_get(
        "ReferenceAgent1",
        lambda: ReferenceAgent(
            description="Reference Agent 1",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.1),
        ),
    )
    ref2 = runtime.register_and_get(
        "ReferenceAgent2",
        lambda: ReferenceAgent(
            description="Reference Agent 2",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=0.5),
        ),
    )
    ref3 = runtime.register_and_get(
        "ReferenceAgent3",
        lambda: ReferenceAgent(
            description="Reference Agent 3",
            system_messages=[SystemMessage("You are a helpful assistant that can answer questions.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo", temperature=1.0),
        ),
    )
    agg = runtime.register_and_get(
        "AggregatorAgent",
        lambda: AggregatorAgent(
            description="Aggregator Agent",
            system_messages=[
                SystemMessage(
                    "...synthesize these responses into a single, high-quality response... Responses from models:"
                )
            ],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            references=[ref1, ref2, ref3],
        ),
    )
    result = await runtime.send_message(AggregatorTask(task="What are some fun things to do in SF?"), agg)
    while result.done() is False:
        await runtime.process_next()
    print(result.result())


if __name__ == "__main__":
    import logging

    logging.basicConfig(level=logging.WARNING)
    logging.getLogger("agnext").setLevel(logging.DEBUG)
    asyncio.run(main())
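The fan-out/fan-in shape used by `AggregatorAgent.handle_task` above can be illustrated standalone; a minimal sketch with stand-in workers instead of model calls:

```python
import asyncio

async def reference_worker(name: str, task: str) -> str:
    # Stand-in for the model call made by one reference agent.
    await asyncio.sleep(0.1)
    return f"{name}: draft answer to {task!r}"

async def aggregate(task: str) -> str:
    # Dispatch the task to all workers concurrently, then combine the results,
    # mirroring the asyncio.gather call in the aggregator.
    results = await asyncio.gather(*[reference_worker(f"ref{i}", task) for i in range(3)])
    return "\n\n".join(results)

print(asyncio.run(aggregate("What are some fun things to do in SF?")))
```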
@@ -32,7 +32,9 @@ to sample a random number of neighbors' responses to use.
import asyncio
import logging
import os
import re
import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List
@@ -43,12 +45,15 @@ from agnext.components.models import (
    AssistantMessage,
    ChatCompletionClient,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
@@ -209,7 +214,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver1",
        lambda: MathSolver(
            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver2", "MathSolver4"],
            max_round=3,
        ),
@@ -217,7 +222,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver2",
        lambda: MathSolver(
            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver1", "MathSolver3"],
            max_round=3,
        ),
@@ -225,7 +230,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver3",
        lambda: MathSolver(
            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver2", "MathSolver4"],
            max_round=3,
        ),
@@ -233,7 +238,7 @@ async def main(question: str) -> None:
    runtime.register(
        "MathSolver4",
        lambda: MathSolver(
            OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            neighbor_names=["MathSolver1", "MathSolver3"],
            max_round=3,
        ),
@@ -12,6 +12,8 @@ list of function calls.
import asyncio
import json
import os
import sys
from dataclasses import dataclass
from typing import List
@@ -24,13 +26,16 @@ from agnext.components.models import (
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.components.tools import PythonCodeExecutionTool, Tool
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class ToolExecutionTask:
@@ -130,7 +135,7 @@ async def main() -> None:
        lambda: ToolEnabledAgent(
            description="Tool Use Agent",
            system_messages=[SystemMessage("You are a helpful AI Assistant. Use your tools to solve problems.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            tools=tools,
        ),
    )
@@ -13,6 +13,8 @@ the results back to the tool use agent.
import asyncio
import json
import os
import sys
import uuid
from dataclasses import dataclass
from typing import Dict, List
@@ -26,13 +28,16 @@ from agnext.components.models import (
    FunctionExecutionResult,
    FunctionExecutionResultMessage,
    LLMMessage,
    OpenAIChatCompletionClient,
    SystemMessage,
    UserMessage,
)
from agnext.components.tools import PythonCodeExecutionTool, Tool
from agnext.core import CancellationToken

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from common.utils import get_chat_completion_client_from_envs


@dataclass
class ToolExecutionTask:
@@ -192,7 +197,7 @@ async def main() -> None:
        lambda: ToolUseAgent(
            description="Tool Use Agent",
            system_messages=[SystemMessage("You are a helpful AI Assistant. Use your tools to solve problems.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            tools=tools,
        ),
    )
@@ -10,15 +10,16 @@ import sys
from agnext.application import SingleThreadedAgentRuntime
from agnext.components.models import (
    OpenAIChatCompletionClient,
    SystemMessage,
)
from agnext.components.tools import FunctionTool
from typing_extensions import Annotated

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__))))
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))

from coding_one_agent_direct import AIResponse, ToolEnabledAgent, UserRequest
from common.utils import get_chat_completion_client_from_envs


async def get_stock_price(ticker: str, date: Annotated[str, "The date in YYYY/MM/DD format."]) -> float:
@@ -36,7 +37,7 @@ async def main() -> None:
        lambda: ToolEnabledAgent(
            description="Tool Use Agent",
            system_messages=[SystemMessage("You are a helpful AI Assistant. Use your tools to solve problems.")],
            model_client=OpenAIChatCompletionClient(model="gpt-3.5-turbo"),
            model_client=get_chat_completion_client_from_envs(model="gpt-3.5-turbo"),
            tools=[
                # Define a tool that gets the stock price.
                FunctionTool(