Installation
Setup Virtual Environment
When not using a docker container, we recommend using a virtual environment to install AutoGen. This will ensure that the dependencies for AutoGen are isolated from the rest of your system.
Option 1: venv
You can create a virtual environment with venv as below:
python3 -m venv pyautogen
source pyautogen/bin/activate
The following command will deactivate the current venv environment:
deactivate
Option 2: conda
Another option is Conda. Conda works better at solving dependency conflicts than pip. You can install it by following this doc,
and then create a virtual environment as below:
conda create -n pyautogen python=3.10 # python 3.10 is recommended as it's stable and not too old
conda activate pyautogen
The following command will deactivate the current conda environment:
conda deactivate
Now, you're ready to install AutoGen in the virtual environment you've just created.
Python
AutoGen requires Python version >= 3.8, < 3.12. It can be installed from pip:
pip install pyautogen
pyautogen<0.2 requires openai<1. Starting from pyautogen v0.2, openai>=1 is required.
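After installing, a minimal two-agent chat can serve as a quick sanity check. This is only a sketch: the model name and API key below are placeholders that you need to replace with your own values.
import autogen
# Placeholder config: substitute your own model name and API key (or load the key from the environment).
config_list = [{"model": "gpt-4", "api_key": "<your OpenAI API key>"}]
assistant = autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = autogen.UserProxyAgent("user_proxy", code_execution_config=False)
user_proxy.initiate_chat(assistant, message="What is the capital of France?")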
Migration guide to v0.2
openai v1 is a total rewrite of the library with many breaking changes. For example, inference now requires instantiating a client instead of calling a global class method.
Therefore, some changes are required for users of pyautogen<0.2.
- api_base -> base_url, request_timeout -> timeout in llm_config and config_list. max_retry_period and retry_wait_time are deprecated. max_retries can be set for each client.
- MathChat and TeachableAgent are unsupported until they are tested in a future release.
- autogen.Completion and autogen.ChatCompletion are deprecated. The essential functionalities are moved to autogen.OpenAIWrapper:
from autogen import OpenAIWrapper
client = OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "2+2="}])
print(client.extract_text_or_function_call(response))
- Inference parameter tuning and inference logging features are currently unavailable in OpenAIWrapper. Logging will be added in a future release. Inference parameter tuning can be done via flaml.tune.
- use_cache is removed as a kwarg in OpenAIWrapper.create() because it is automatically decided by seed: int | None.
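As a concrete illustration of the renamed keys, an llm_config written for pyautogen<0.2 can be updated along the following lines. This is only a sketch; the model, URL, and key values are placeholders.
# pyautogen<0.2 (deprecated keys):
# llm_config = {"config_list": config_list, "api_base": "https://api.openai.com/v1", "request_timeout": 120}
# pyautogen v0.2: base_url replaces api_base, timeout replaces request_timeout,
# and seed controls caching of inference results (set it to None to disable the cache).
llm_config = {
    "config_list": [{"model": "gpt-4", "base_url": "https://api.openai.com/v1", "api_key": "<your key>"}],
    "timeout": 120,
    "seed": 42,
}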
Optional Dependencies
- docker
For the best user experience and seamless code execution, we highly recommend using Docker with AutoGen. Docker is a containerization platform that simplifies the setup and execution of your code. Developing in a docker container, such as a GitHub Codespace, also makes development convenient.
When running AutoGen outside of a docker container, you also need to install the Python package docker in order to use docker for code execution:
pip install docker
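Once the docker package is installed, a code-executing agent can be asked to run generated code inside a container, for example as below. This is only a sketch; the agent name and working directory are placeholders.
from autogen import UserProxyAgent
# use_docker=True runs the generated code inside a docker container;
# work_dir is the directory where generated files are written.
user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={"work_dir": "coding", "use_docker": True},
)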
- blendsearch
pyautogen<0.2 offers a cost-effective hyperparameter optimization technique, EcoOptiGen, for tuning Large Language Models. Please install with the [blendsearch] option to use it.
pip install "pyautogen[blendsearch]<0.2"
Example notebooks: Optimize for Code Generation, Optimize for Math
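For reference, tuning in pyautogen<0.2 goes through autogen.Completion.tune, roughly as in the notebooks above. The sketch below uses placeholder data, a placeholder metric function, and example budgets; substitute your own dataset, prompt, and evaluation logic.
import autogen
# Placeholder tuning data: each entry provides the fields referenced by the prompt template.
tune_data = [{"definition": "Return the sum of two numbers as a Python function named add."}]
def eval_func(responses, **data):
    # Placeholder metric: count a non-empty response as a success.
    return {"success": any(r.strip() for r in responses)}
config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_func,
    inference_budget=0.05,      # rough price limit per instance
    optimization_budget=3,      # rough total price limit for tuning
    num_samples=-1,
    prompt="{definition}",
)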
- retrievechat
pyautogen<0.2 supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it.
pip install "pyautogen[retrievechat]<0.2"
Example notebooks: Automated Code Generation and Question Answering with Retrieval Augmented Agents, Group Chat with Retrieval Augmented Generation (with 5 group member agents and 1 manager agent)
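The RAG agents live in autogen.agentchat.contrib; a minimal question-answering setup looks roughly like the sketch below, where config_list, docs_path, and the question are placeholders.
from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent
config_list = [{"model": "gpt-4", "api_key": "<your key>"}]  # placeholder
assistant = RetrieveAssistantAgent(name="assistant", llm_config={"config_list": config_list})
ragproxyagent = RetrieveUserProxyAgent(
    name="ragproxyagent",
    retrieve_config={"task": "qa", "docs_path": "path/to/your/docs"},
)
# The proxy agent retrieves relevant chunks from docs_path and adds them to the prompt.
ragproxyagent.initiate_chat(assistant, problem="What does this codebase do?")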
- mathchat
pyautogen<0.2 offers an experimental agent for math problem solving. Please install with the [mathchat] option to use it.
pip install "pyautogen[mathchat]<0.2"
Example notebooks: Using MathChat to Solve Math Problems
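The agent is available as MathUserProxyAgent; a minimal setup looks roughly like the sketch below, where config_list and the problem text are placeholders.
import autogen
from autogen.agentchat.contrib.math_user_proxy_agent import MathUserProxyAgent
config_list = [{"model": "gpt-4", "api_key": "<your key>"}]  # placeholder
assistant = autogen.AssistantAgent(name="assistant", llm_config={"config_list": config_list})
mathproxyagent = MathUserProxyAgent(name="mathproxyagent", human_input_mode="NEVER")
# The proxy agent wraps the problem in a MathChat prompt and executes the generated queries.
mathproxyagent.initiate_chat(assistant, problem="Find all x that satisfy 2x + 3 < 7.")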