mirror of
https://github.com/microsoft/autogen.git
synced 2026-04-20 03:02:16 -04:00
Fix documentation (#1075)
* Fix indentation in documentation
* newline
* version
@@ -23,30 +23,30 @@ There are several ways of using flaml:
#### (New) [Auto Generation](/docs/Use-Cases/Auto-Generation)
Maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4, including:
- A drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalities like tuning, caching, templating, and filtering. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.

  ```python
  from flaml import oai

  # perform tuning
  config, analysis = oai.Completion.tune(
      data=tune_data,
      metric="success",
      mode="max",
      eval_func=eval_func,
      inference_budget=0.05,
      optimization_budget=3,
      num_samples=-1,
  )

  # perform inference for a test instance
  response = oai.Completion.create(context=test_instance, **config)
  ```

- LLM-driven intelligent agents which can perform tasks autonomously or with human feedback, including tasks that require using tools via code. For example,

  ```python
  assistant = AssistantAgent("assistant")
  user = UserProxyAgent("user", human_input_mode="TERMINATE")
  assistant.receive("Draw a rocket and save to a file named 'rocket.svg'")
  ```
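The `eval_func` passed to `oai.Completion.tune` above scores the model's responses on each tuning-data instance. A minimal sketch of what such a function might look like (the field name `answer` and the exact-match success rule are illustrative assumptions, not FLAML's prescribed metric):

```python
# Hypothetical evaluation function for tuning (a sketch, not FLAML's API spec).
# tune() calls it with the list of model responses plus the fields of the
# current data instance, and expects a dict containing the chosen metric.
def eval_func(responses, **data_instance):
    expected = data_instance["answer"].strip()
    # "success" is 1 if any response exactly matches the expected answer.
    success = any(r.strip() == expected for r in responses)
    return {"success": int(success)}
```

With `metric="success"` and `mode="max"`, tuning then searches for the configuration that maximizes this value within the given budgets.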
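The `human_input_mode="TERMINATE"` argument in the agent example controls when the user proxy pauses to ask the human for input. A conceptual sketch of that behavior (an illustration only, not FLAML's actual implementation):

```python
# Toy model (an assumption, not FLAML's code) of what human_input_mode governs:
# when the user proxy agent stops auto-replying and asks the human.
def should_ask_human(mode, assistant_signaled_termination):
    if mode == "ALWAYS":
        # Ask the human before every reply.
        return True
    if mode == "TERMINATE":
        # Ask only when the assistant signals it has finished the task.
        return assistant_signaled_termination
    # "NEVER": run fully autonomously.
    return False
```

Under `"TERMINATE"`, the agents can iterate on the task (e.g., writing and running code) without interruption, with the human consulted only at the end.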
#### [Task-oriented AutoML](/docs/Use-Cases/task-oriented-automl)