* docker documentation
* docker doc
* clean contribute.md
* minor change
* Add more detailed description
* add docker instructions
* more dockerfiles
* readme update
* latest python
* dev docker python version
* add version
* readme
* improve doc
* improve doc
* path name
* naming
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Add suggestion to install colima for Mac users
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/Installation.md
Co-authored-by: olgavrou <olgavrou@gmail.com>
* update doc
* typo
* improve doc
* add more options in dev file
* contrib
* add link to doc
* add link
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* instruction
* Update website/docs/FAQ.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* FAQ
* comment autogen studio
---------
Co-authored-by: Yuandong Tian <yuandong@fb.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: olgavrou <olgavrou@gmail.com>
* Update chat_with_teachable_agent.py to v2.
* Update agentchat_teachability.ipynb to v2.
* Add test of teachability accuracy.
* Update installation instructions.
* Add to contrib tests.
* pre-commit fixes
* Apply reviewer suggestions to test workflows.
* LMM Code added
* LLaVA notebook update
* Test cases and Notebook modified for OpenAI v1
* Move LMM into contrib
To resolve test and deployment issues.
In the future, we can install pillow by default and then move the LMM agents back into agentchat.
* LMM test setup update
* try...except... clause for LMM tests
* disable patch for llava agent test
To resolve dependency issues in the build.
* Add LMM Blog
* Change docstring for LMM agents
* Docstring update patch
* llava: insert reply at position 1 now
So it can still handle human_input_mode
and max_consecutive_auto_reply (see the sketch below).
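A minimal sketch of what registering a reply at position 1 might look like via autogen's `register_reply`; the agent and reply function here are hypothetical, and the point is only that position 0 is left to the built-in human-input / auto-reply check.

```python
from autogen import Agent, ConversableAgent

# Hypothetical stand-in for the LLaVA reply function; the real one builds a
# multimodal reply. The signature follows what register_reply expects.
def llava_reply(recipient, messages=None, sender=None, config=None):
    return True, "placeholder multimodal reply"

agent = ConversableAgent(name="llava_agent", llm_config=False)

# Insert at position 1 rather than 0, so the built-in check for
# human_input_mode and max_consecutive_auto_reply still runs first.
agent.register_reply([Agent, None], reply_func=llava_reply, position=1)
```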
* Resolve comments
Fix typos, blog posts, and yml files, and add OpenAIWrapper.
* Signature typo fix for LMM agent: system_message
* Update LMM "content" from latest OpenAI release
Reference https://platform.openai.com/docs/guides/vision
* update LMM test according to latest OpenAI release
* Fully support GPT-4V now
1. Add a notebook for GPT-4V. The LLaVA notebook is also updated.
2. img_utils updated.
3. The GPT-4V formatter now returns the base64 image with its mime type.
4. Infer the mime type directly from the base64 image content (when loading an image without a file suffix); see the sketch after this list.
5. Test cases modified according to all the related changes.
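Items 3-4 amount to sniffing the mime type from the decoded bytes and attaching it to the base64 payload. A rough sketch of that idea (not the actual img_utils code; the function names and data-URI output are assumptions):

```python
import base64

def infer_mime_type(b64_image: str) -> str:
    """Guess the image mime type from the magic bytes of a base64-encoded image."""
    header = base64.b64decode(b64_image)[:12]
    if header.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image/png"
    if header.startswith(b"\xff\xd8\xff"):
        return "image/jpeg"
    if header.startswith((b"GIF87a", b"GIF89a")):
        return "image/gif"
    if header[:4] == b"RIFF" and header[8:12] == b"WEBP":
        return "image/webp"
    return "image/png"  # fallback assumption when the format is unrecognized

def to_data_uri(b64_image: str) -> str:
    """Attach the inferred mime type so the image can be passed as a data URI."""
    return f"data:{infer_mime_type(b64_image)};base64,{b64_image}"
```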
* GPT-4V link updated in blog
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update Installation.md
Replace autogen->pyautogen in env setup to avoid confusion
Related issue: #211
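The source of the confusion: the PyPI package is named `pyautogen`, but the module you import is `autogen`. For illustration (assuming a standard pip install):

```python
# pip install pyautogen   <-- package name on PyPI
import autogen            # <-- import name in code

print(autogen.__version__)
```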
* Update Installation.md
Add deactivation instructions
* Update website/docs/Installation.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* docs: added virtual environment setup process
* Update website/docs/Installation.md
---------
Co-authored-by: Li Jiang <bnujli@gmail.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update Installation.md
The usage and importance of Docker are explained more precisely and to the point.
* Update website/docs/Installation.md
---------
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* support latest xgboost version (see the usage sketch at the end)
* Update test_classification.py
* Update
There are problems when installing xgboost 1.6.1 on Python 3.6.
* cleanup
* xgboost version
* remove time_budget_s in test
* remove redundancy
* drop support for Python 3.6
Co-authored-by: zsk <shaokunzhang529@gmail.com>
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
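For context on the xgboost changes above, a minimal FLAML usage sketch restricted to the xgboost learner (dataset, task, and time budget are arbitrary choices here):

```python
from flaml import AutoML
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)

automl = AutoML()
# Limit the search to the xgboost estimator; time_budget is in seconds.
automl.fit(X, y, task="classification", estimator_list=["xgboost"], time_budget=10)

print(automl.best_estimator, automl.best_config)
```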