* Initial commit of the autogen testbed environment.
* Fixed some typos in the Testbed README.md
* Added some stricter termination logic to the two_agent scenario, and switched the logo task from finding Autogen's logo to finding Microsoft's (it's easier)
* Added documentation to testbed code in preparation for PR
* Added a variation of HumanEval to the Testbed. It is also a reasonable example of how to integrate other benchmarks.
* Removed ChatCompletion.start_logging and related features. Added an explicit TERMINATE output to HumanEval to save 1 turn in each conversation.
* Added metrics utils script for HumanEval
* Updated the requirements in the README.
* Added documentation for HumanEval csv schemas
* Standardized on how the OAI_CONFIG_LIST is handled.
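As context for the OAI_CONFIG_LIST standardization: the file is conventionally a JSON list of model/credential entries. A minimal stdlib sketch of loading and filtering such a list (the keys follow AutoGen's documented convention; the `filter_by_model` helper here is illustrative, not AutoGen's actual API):

```python
import json

# Illustrative OAI_CONFIG_LIST content: a JSON list of model/credential dicts.
config_list_json = """
[
    {"model": "gpt-4", "api_key": "sk-..."},
    {"model": "gpt-3.5-turbo", "api_key": "sk-..."}
]
"""

def filter_by_model(config_list, models):
    # Keep only entries whose "model" field is in the requested set.
    return [c for c in config_list if c["model"] in models]

config_list = json.loads(config_list_json)
gpt4_only = filter_by_model(config_list, {"gpt-4"})
print([c["model"] for c in gpt4_only])  # ['gpt-4']
```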
* Removed dot-slash from 'includes' path for cross-platform compatibility
* Missed a file.
* Updated readme to include known-working versions.
* Adding async support to get_human_input
* Adjust code for code-formatting test failure
* Adjust test_async_get_human_input.py to run the test asynchronously
* Adjust test_async_get_human_input.py to fix a pre-commit check error
* Adjust test_async_get_human_input.py to fix a pre-commit check error, v2
* Remove unnecessary register_reply call
* Adjust test to use an asyncio call
* Revert to not using asyncio
* Run group chat asynchronously
* ConversableAgent: allow async functions to generate replies
* Add test for async execution
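The async-reply commits above boil down to letting a coroutine produce an agent's reply. A minimal asyncio sketch of that pattern (the `MiniAgent` class and method names are hypothetical, not AutoGen's actual API):

```python
import asyncio

class MiniAgent:
    """Hypothetical agent accepting both sync and async reply functions."""

    def __init__(self, reply_func):
        self.reply_func = reply_func

    async def a_generate_reply(self, message):
        # If the registered reply function is a coroutine function, await it;
        # otherwise call it directly. This mirrors the sync/async split above.
        if asyncio.iscoroutinefunction(self.reply_func):
            return await self.reply_func(message)
        return self.reply_func(message)

async def echo_async(message):
    await asyncio.sleep(0)  # yield control, simulating async I/O
    return f"echo: {message}"

agent = MiniAgent(echo_async)
reply = asyncio.run(agent.a_generate_reply("hello"))
print(reply)  # echo: hello
```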
---------
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update FAQ with workaround for Issue #251
* Update website/docs/FAQ.md
* Update website/docs/FAQ.md
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update Installation.md
Replace autogen->pyautogen in env setup to avoid confusion
Related issue: #211
* Update Installation.md
Add deactivation instructions
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* UPDATE - FAQ section in documentation
* FIX - formatting test failure
* FIX - added disclaimer
* pre-commit
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* UPDATE - notebook and FAQ information for config_list_from_models
---------
Co-authored-by: Ward <award40@LAMU0CLP74YXVX6.uhc.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <bnujli@gmail.com>
* LMM notebook
* Use "register_reply" instead.
* Loop to check for a non-empty LLaVA response
* Run notebook
* Make the llava_call function more flexible
* Include API for LLaVA from Replicate
* LLaVA data format update x2
1. prompt formatter function
2. conversation format with SEP
* Coding example added
* Rename "ImageAgent" -> "LLaVAAgent"
* Docstring and comments updates
* Debug notebook: Remote LLaVA tested
* Example 1: remove system message
* MultimodalConversableAgent added
* Add gpt4v_formatter
* LLaVA: update example 1
* LLaVA: Add link to "Table of Contents"
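Among the changes above, gpt4v_formatter converts prompts with inline image tags into a multimodal message. A hedged regex sketch of that idea, using the common GPT-4V content-list shape (`text` and `image_url` parts); the `<img URL>` tag syntax and `format_multimodal` name here are illustrative, and AutoGen's actual formatter may differ:

```python
import re

def format_multimodal(prompt):
    # Split a prompt on <img URL> tags and emit a GPT-4V-style content list:
    # plain text becomes {"type": "text", ...}; each tag becomes an image part.
    parts = []
    pos = 0
    for m in re.finditer(r"<img\s+([^>]+)>", prompt):
        if m.start() > pos:
            parts.append({"type": "text", "text": prompt[pos:m.start()]})
        parts.append({"type": "image_url", "image_url": {"url": m.group(1)}})
        pos = m.end()
    if pos < len(prompt):
        parts.append({"type": "text", "text": prompt[pos:]})
    return parts

content = format_multimodal("Describe <img http://example.com/cat.png> briefly.")
print(content)
```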
* Updating Examples to follow new categorical structure. #273
Addressing the remaining task for #273, I have copied over the changes from /Usecases to /Examples to follow the new categorical example notebooks structure.
* Add the new example notebook
---------
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>