# What is Next?
Now that you have learned the basics of AutoGen, you can start building your
own agents. Here are some ideas to get you started without diving into
advanced topics:
1. **Chat with LLMs**: In [Human in the Loop](./human-in-the-loop) we covered
   basic human-in-the-loop usage. You can try hooking up different LLMs
   through local proxy servers like [Ollama](https://github.com/ollama/ollama)
   and chatting with them via the human-in-the-loop component of your human
   proxy agent (see the first sketch after this list).
2. **Prompt Engineering**: In [Code Executors](./code-executors) we covered a
   simple two-agent scenario using GPT-4 and a Python code executor. To make
   this scenario work with different LLMs and programming languages, you will
   probably need to tune the system message of the code writer agent. The same
   goes for the other scenarios covered in this tutorial: try tuning the
   system messages for different LLMs (the second sketch after this list shows
   one place to start).
3. **Complex Tasks**: In [Conversation Patterns](./conversation-patterns) we
   covered the basic conversation patterns. You can look for other tasks that
   can be decomposed into these patterns, and leverage code executors to make
   the agents more powerful (see the third sketch after this list).
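
To make the first idea concrete, here is a minimal sketch (not part of the
tutorial itself) of a human-in-the-loop chat with a local model served by
Ollama. The model name `llama3`, the endpoint URL, and the opening message are
placeholders, and it assumes Ollama is already running with that model pulled.

```python
# A minimal sketch: chat with a local model through the human-in-the-loop
# component. Assumes Ollama is serving its OpenAI-compatible endpoint at
# http://localhost:11434/v1 and the "llama3" model has been pulled;
# adjust the model name and endpoint to your own setup.
from autogen import ConversableAgent

llm_config = {
    "config_list": [
        {
            "model": "llama3",                        # placeholder local model
            "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
            "api_key": "ollama",                      # Ollama ignores the key, but the field must be set
        }
    ]
}

assistant = ConversableAgent(
    name="assistant",
    system_message="You are a helpful assistant.",
    llm_config=llm_config,
)

# The human proxy asks you for input at every turn, keeping you in the loop.
human_proxy = ConversableAgent(
    name="human_proxy",
    llm_config=False,           # no LLM for this agent
    human_input_mode="ALWAYS",  # always ask the human for input
)

human_proxy.initiate_chat(assistant, message="Hi! Which model are you running?")
```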
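
For the second idea, the sketch below shows one place to start tuning the code
writer's system message for a local model. It reuses the placeholder Ollama
configuration above together with the legacy code executor configuration from
[Code Executors](./code-executors); the system message and the task are only
illustrations, not a recommended prompt.

```python
# A minimal sketch: iterate on the code writer's system message for a local
# model, with the legacy code executor (work_dir/use_docker) on the other side.
from autogen import ConversableAgent

llm_config = {
    "config_list": [
        {"model": "llama3", "base_url": "http://localhost:11434/v1", "api_key": "ollama"}
    ]
}

# Tune this system message until your model reliably returns a single
# fenced Python code block per reply.
code_writer = ConversableAgent(
    name="code_writer",
    system_message=(
        "You are a helpful coding assistant. Solve tasks by replying with "
        "Python code in a single fenced Python code block, and nothing else."
    ),
    llm_config=llm_config,
)

# The executor runs the code locally (legacy executor) and asks you before
# each turn, so you can inspect what is about to run.
code_executor = ConversableAgent(
    name="code_executor",
    llm_config=False,
    human_input_mode="ALWAYS",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

code_executor.initiate_chat(
    code_writer,
    message="Compute the first 10 Fibonacci numbers and print them.",
)
```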
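
For the third idea, this sketch applies the group chat pattern from
[Conversation Patterns](./conversation-patterns) to a task decomposed across a
planner, a code writer, and an executor. The roles, the task, and the reuse of
the placeholder `llm_config` are illustrative assumptions rather than a
prescribed setup.

```python
# A minimal sketch of the group chat pattern applied to a decomposed task.
from autogen import ConversableAgent, GroupChat, GroupChatManager

llm_config = {
    "config_list": [
        {"model": "llama3", "base_url": "http://localhost:11434/v1", "api_key": "ollama"}
    ]
}

planner = ConversableAgent(
    name="planner",
    system_message="Break the user's task into small, ordered steps.",
    llm_config=llm_config,
)
writer = ConversableAgent(
    name="writer",
    system_message="Write Python code for the current step in a single fenced Python code block.",
    llm_config=llm_config,
)
executor = ConversableAgent(
    name="executor",
    llm_config=False,
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},  # legacy executor
)

# The manager selects which agent speaks on each round.
group_chat = GroupChat(agents=[planner, writer, executor], messages=[], max_round=8)
manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

executor.initiate_chat(
    manager,
    message="Print today's date and how many days remain until the end of the year.",
)
```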
## Dig Deeper
- Read the [topic docs](/docs/topics) to learn more
- Read the examples and guides in the [notebooks section](/docs/notebooks)
## Get Help
If you have any questions, you can ask in our [GitHub
Discussions](https://github.com/microsoft/autogen/discussions), or join
our [Discord Server](https://discord.gg/pAbnFJrkgZ).
[![](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAbnFJrkgZ)
## Get Involved
- Contribute your work to our [gallery](../Gallery)
- Follow our [contribution guide](../Contribute) to make a pull request to AutoGen
- Share your work with the community on the Discord server