mirror of
https://github.com/microsoft/autogen.git
synced 2026-04-20 03:02:16 -04:00
Add AOAI Support in AGS (#4718)
* add oai support, improve component config typing, minor updates to docs, update ags tests
* faq updates
* update faq, add model_capabilities
* update faq
@@ -13,8 +13,54 @@ A: You can specify the directory where files are stored by setting the `--appdir
## Q: Can I use other models with AutoGen Studio?
Yes. AutoGen standardizes on the OpenAI model API format, and you can use any API server that offers an OpenAI-compliant endpoint.
AutoGen Studio is based on declarative specifications, and this applies to models as well. Agents can include a `model_client` field which specifies the model endpoint details, including `model`, `api_key`, `base_url`, and `model_type`.
An example of an OpenAI model client is shown below:
```json
{
  "model": "gpt-4o-mini",
  "model_type": "OpenAIChatCompletionClient",
  "api_key": "your-api-key"
}
```
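Because the client specification is plain JSON, it can be sanity-checked before being pasted into the UI. Below is a minimal sketch using only the Python standard library; the set of required fields is an assumption based on the example above, not an official AutoGen Studio schema:

```python
import json

def missing_fields(config_json: str, required=("model", "model_type")) -> set:
    """Return top-level fields missing from a model client JSON config.

    The default `required` tuple is an assumption drawn from the example
    in this FAQ, not an official AutoGen Studio schema.
    """
    config = json.loads(config_json)
    return set(required) - config.keys()

# The OpenAI example from above passes the check.
openai_config = """
{
  "model": "gpt-4o-mini",
  "model_type": "OpenAIChatCompletionClient",
  "api_key": "your-api-key"
}
"""
print(missing_fields(openai_config))  # prints set()
```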
An example of an Azure OpenAI model client is shown below:
```json
{
  "model": "gpt-4o-mini",
  "model_type": "AzureOpenAIChatCompletionClient",
  "azure_deployment": "gpt-4o-mini",
  "api_version": "2024-02-15-preview",
  "azure_endpoint": "https://your-endpoint.openai.azure.com/",
  "api_key": "your-api-key",
  "component_type": "model"
}
```
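Compared with the plain OpenAI client, the Azure variant needs deployment and endpoint details. The sketch below picks the expected fields based on `model_type`; the field lists are assumptions derived from the two examples here, not an official schema:

```python
import json

# Assumed field requirements, derived from the examples in this FAQ.
EXPECTED_FIELDS = {
    "OpenAIChatCompletionClient": {"model", "model_type", "api_key"},
    "AzureOpenAIChatCompletionClient": {
        "model", "model_type", "azure_deployment",
        "api_version", "azure_endpoint", "api_key",
    },
}

def check_model_config(config_json: str) -> set:
    """Return any expected fields missing for the config's model_type."""
    config = json.loads(config_json)
    expected = EXPECTED_FIELDS.get(config.get("model_type"), set())
    return expected - config.keys()

azure_config = """
{
  "model": "gpt-4o-mini",
  "model_type": "AzureOpenAIChatCompletionClient",
  "azure_deployment": "gpt-4o-mini",
  "api_version": "2024-02-15-preview",
  "azure_endpoint": "https://your-endpoint.openai.azure.com/",
  "api_key": "your-api-key",
  "component_type": "model"
}
"""
print(check_model_config(azure_config))  # prints set()
```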
Have a local model server like Ollama, vLLM, or LMStudio that provides an OpenAI-compliant endpoint? You can use that as well.
```json
{
  "model": "TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
  "model_type": "OpenAIChatCompletionClient",
  "base_url": "http://localhost:1234/v1",
  "api_version": "1.0",
  "component_type": "model",
  "model_capabilities": {
    "vision": false,
    "function_calling": false,
    "json_output": false
  }
}
```
```{caution}
It is important that you add the `model_capabilities` field to the model client specification for custom models. This is used by the framework to instantiate and use the model correctly.
```
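One way to catch a forgotten `model_capabilities` block early is to check the config before loading it into AutoGen Studio. A minimal sketch, assuming the three boolean flags shown in the example above:

```python
import json

# The three capability flags shown in the example above (an assumption,
# not an exhaustive list of what AutoGen may support).
CAPABILITY_FLAGS = ("vision", "function_calling", "json_output")

def has_valid_capabilities(config_json: str) -> bool:
    """Check that a custom-model config declares all capability flags as booleans."""
    caps = json.loads(config_json).get("model_capabilities")
    if not isinstance(caps, dict):
        return False
    return all(isinstance(caps.get(flag), bool) for flag in CAPABILITY_FLAGS)

local_config = """
{
  "model": "TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
  "model_type": "OpenAIChatCompletionClient",
  "base_url": "http://localhost:1234/v1",
  "model_capabilities": {"vision": false, "function_calling": false, "json_output": false}
}
"""
print(has_valid_capabilities(local_config))  # prints True
```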
## Q: The server starts but I can't access the UI
@@ -13,27 +13,27 @@ The expected usage behavior is that developers use the provided Team Builder int
## Building an Agent Team
AutoGen Studio is tied closely to the component abstractions provided by AutoGen AgentChat. These include {py:class}`~autogen_agentchat.teams`, {py:class}`~autogen_agentchat.agents`, {py:class}`~autogen_core.models`, {py:class}`~autogen_core.tools`, and termination {py:class}`~autogen_agentchat.conditions`.
Users can define these components in the Team Builder interface either via a declarative specification or by dragging and dropping components from a component library.
## Interactively Running Teams
AutoGen Studio Playground allows users to interactively test teams on tasks and review resulting artifacts (such as images, code, and text).
Users can also review the “inner monologue” of teams as they address tasks, and view profiling information such as costs associated with the run (number of turns, number of tokens, etc.) and agent actions (whether tools were called and the outcomes of code execution).
## Importing and Reusing Team Configurations
AutoGen Studio provides a Gallery view with a built-in default gallery. A gallery is simply a collection of components: teams, agents, models, tools, etc. Users can also import components from third-party community sources, either by providing a URL to a JSON gallery spec or by pasting in the gallery JSON. This allows users to reuse and share team configurations with others.
- Gallery -> New Gallery -> Import
- Set as default gallery (in side bar, by clicking pin icon)
- Reuse components in Team Builder. Team Builder -> Sidebar -> From Gallery
### Using AutoGen Studio Teams in a Python Application
An exported team can be easily integrated into any Python application using the `TeamManager` class with just two lines of code. Underneath, the `TeamManager` rehydrates the team specification into AutoGen AgentChat agents that are subsequently used to address tasks.
```python
@@ -44,12 +44,14 @@ result_stream = tm.run(task="What is the weather in New York?", team_config="te
```
To export a team configuration, click on the export button in the Team Builder interface. This will generate a JSON file that can be used to rehydrate the team in a Python application.
<!-- ### Deploying AutoGen Studio Teams as APIs
The team can be launched as an API endpoint from the command line using the `autogenstudio` command line tool.
```bash
autogenstudio serve --team=team.json --port=5000
```
Similarly, the team launch command above can be wrapped into a Dockerfile that can be deployed on cloud services like Azure Container Apps or Azure Web Apps. -->