* Initial commit.
* Disable LLM response caching.
* Add teachability option to setup.py
* Modify test to use OAI_CONFIG_LIST as suggested in the docs.
* Expand unit test.
* Complete unit test.
* Add filter_dict
* details
* AnalysisAgent
* details
* More documentation and debug output.
* Support retrieval of any number of relevant memos, including zero.
* More robust analysis separator.
* cleanup
* teach_config
* refactoring
* For robustness, allow more flexibility on memo storage and retrieval.
* de-dupe the retrieved memos.
* Simplify AnalysisAgent. The unit tests now pass with gpt-3.5
* comments
* Add a verbosity level to control analyzer messages.
* refactoring
* comments
* Persist memory on disk.
* cleanup
* Use markdown to format retrieved memos.
* Use markdown in TextAnalyzerAgent
* Add another verbosity level.
* clean up logging
* notebook
* minor edits
* cleanup
* linter fixes
* Skip tests that fail to import openai
* Address reviewer feedback.
* lint
* refactoring
* Improve wording
* Improve code coverage.
* lint
* Use llm_config to control caching.
* lowercase notebook name
* Sort out the parameters passed through to ConversableAgent, and supply full docstrings for the others.
* lint
* Allow TextAnalyzerAgent to be given a different llm_config than TeachableAgent.
* documentation
* Modifications to run openai workflow.
* Test on just python 3.10.
Replace agent with teachable_agent as recommended.
* Test on python 3.9 instead of 3.10.
* Remove space from name -> teachableagent
---------
Co-authored-by: Li Jiang <bnujli@gmail.com>
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Add custom embedding function
* Add support for custom vector db
* Improve docstring
* Improve docstring
* Improve docstring
* Add support for a customized is_termination_msg function
* Add a test for a customized vector db with lancedb
* Fix tests
* Add test for embedding_function
* Update docstring
* update colab link
* typo
* upload file instruction
* update system message and notebooks
* update notebooks
* notebook test
* aoai api version and exclusion
* gpt-3.5-turbo
* dict check
* change model for test
* endpoints, cache_path and func description update
* model list
* gitter -> discord
* response filter
* rewrite the implementation based on the filter
* multi responses
* abs path
* code handling
* option to not use docker
* context
* eval_only -> raise_error
* notebook
* utils
* utils
* separate tests
* test
* test
* test
* test
* test
* test
* test
* test
* **config in test()
* test
* test
* filename
* Improve test by removing unnecessary environment variable
* Fix PULL_REQUEST_TEMPLATE
* Hide pre-commit check
* remove the checkbox for pre-commit
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* improve max_valid_n and doc
* Update README.md
Co-authored-by: Li Jiang <lijiang1@microsoft.com>
* add support for chatgpt
* notebook
* newline at end of file
* chatgpt notebook
* ChatGPT in Azure
* doc
* math
* warning, timeout, log file name
* handle import error
* doc update; default value
* paper
* doc
* docstr
* eval_func
* add a test func in completion
* update notebook
* update math notebook
* improve notebook
* lint and handle exception
* flake8
* exception in test
* add agg_method
* NameError
* refactor
* Update flaml/integrations/oai/completion.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update flaml/integrations/oai/completion.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* add example
* merge files from oai_eval_test
* Revert "merge files from oai_eval_test"
This reverts commit 1e6a550f913bb94df6e9680934ccb7175d00702e.
* merge
* save results to notebook_output
* update version and cache
* update doc
* save nb cell results to file
* fix typo in model name
* code improvements
* improve docstr
* docstr
* docstr on the Returns of test
---------
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
Co-authored-by: Li Jiang <lijiang1@microsoft.com>
Co-authored-by: Susan Xueqing Liu <liususan091219@users.noreply.github.com>
* add basic support for Spark dataframes
add support for the SynapseML LightGBM model
update to pyspark>=3.2.0 to leverage the pandas-on-Spark API
* clean code, add TODOs
* add sample_train_data for pyspark.pandas dataframe, fix bugs
* improve some functions, fix bugs
* fix "dictionary changed size during iteration" error
* update model predict
* update LightGBM model, update test
* update SynapseML LightGBM params
* update synapseML and tests
* update TODOs
* Added roc_auc support for spark models
* Added support for the score method of spark estimators
* Added a test for automl score with spark estimators
* Added cv support for pyspark.pandas dataframes
* Update test, fix bugs
* Added tests
* Updated docs, tests, added a notebook
* Fix bugs in non-spark env
* Fix bugs and improve tests
* Fix uninstall pyspark
* Fix test errors
* Fix java.lang.OutOfMemoryError: Java heap space
* Fix test_performance
* Rename test_sparkml to test_0sparkml to use the expected spark conf
* Remove unnecessary widgets in notebook
* Fix iloc java.lang.StackOverflowError
* fix pre-commit
* Added params check for spark dataframes
* Refactor code for train_test_split to a function
* Update train_test_split_pyspark
* Refactor if-else, remove unnecessary code
* Remove y from predict, remove mem control from n_iter compute
* Update workflow
* Improve _split_pyspark
* Fix test failure caused by too short training time
* Fix typos, improve docstrings
* Fix index errors of pandas_on_spark, add spark loss metric
* Fix typo of ndcgAtK
* Update NDCG metrics and tests
* Remove unneeded logger
* Use cache and count to ensure consistent indexes
* refactor for merging main
* fix errors of refactor
* Updated SparkLightGBMEstimator and cache
* Updated config2params
* Remove unused import
* Fix unknown parameters
* Update default_estimator_list
* Add unit tests for spark metrics
* add cost budget; move loc of make_dir
* support openai completion
* install pytest in workflow
* skip openai test
* test openai
* path for docs rebuild
* install datasets
* signal
* notebook
* notebook in workflow
* optional arguments and special params
* key -> k
* improve readability
* assumption
* optimize for model selection
* larger range of max_tokens
* notebook
* python package workflow
* skip on win