* (feat) make prompt caching optional instead of enabled by default
At present, only the Claude models support prompt caching, and even there it is an experimental feature, so it should be implemented as an optional setting rather than enabled by default; see the sketch below.
Signed-off-by: Yi Lin <teroincn@gmail.com>
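For context, a minimal sketch of how such an opt-in flag can gate the cache markers. The `caching_prompt` field name and function shape are assumptions; only the `cache_control` content block is Anthropic's documented prompt-caching format:

```python
# Minimal sketch: only mark messages as cacheable when the user opts in.
# `caching_prompt` is a hypothetical config field; the `cache_control`
# block is Anthropic's documented prompt-caching marker.
def apply_prompt_caching(messages: list[dict], caching_prompt: bool) -> list[dict]:
    if not caching_prompt or not messages:
        return messages
    last = messages[-1]
    content = last.get("content")
    if isinstance(content, str):
        # Anthropic requires structured content blocks to attach cache_control.
        last["content"] = [
            {"type": "text", "text": content, "cache_control": {"type": "ephemeral"}}
        ]
    elif isinstance(content, list) and content:
        content[-1]["cache_control"] = {"type": "ephemeral"}
    return messages
```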
* resolve merge conflict
* fix unittest mock return value
* fix whitespace lint error
---------
Signed-off-by: Yi Lin <teroincn@gmail.com>
* Update docs on LLM providers for consistency
* Update headless command
* minor tweaks based on feedback
---------
Co-authored-by: Robert Brennan <contact@rbren.io>
Co-authored-by: Robert Brennan <accounts@rbren.io>
* update badges
* fix badges
* better badges
* move credits
* more badge work
* add gh logo
* update some copy
* update logo
* fix height
* update text
* emdash
* remove cruft
* move title
* update links
* add hr
* white logo
* move some stuff to getting-started
* revert logo
* more copy changes
* minor tweaks
* fix sidebar
* explicit sidebar
* words
* fix tag
* fix how-to
* more docs work
* update styles
* fix up custom sandbox docs
* change eval title
* fix up getting-started
* fix getting started
* update to 0.9.2
* update screenshot
* add company link
* fix dark mode
* minor fixes
* update image
* update headless and cli docs
* update readme
* fix links
* revert package
* rename links
* fix links
* fix link
* change to claude
* CodeActAgent: fix message prep if prompt caching is not supported
* fix python version in regen tests workflow
* fix "mock_completion" method in conftest
* add disable_vision to LLMConfig; revert change in message parsing in llm.py
* format messages in several files for completion
* refactored message(s) formatting (llm.py); added vision_is_active()
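A rough sketch of the gating described in the two commits above; apart from `disable_vision` and `vision_is_active()`, the names and defaults are assumptions:

```python
from dataclasses import dataclass

import litellm


@dataclass
class LLMConfig:
    model: str = "claude-3-5-sonnet-20240620"  # assumed default, for illustration
    disable_vision: bool = False  # new opt-out flag from this change


def vision_is_active(config: LLMConfig) -> bool:
    # Vision is active only if the model supports image inputs
    # and the user has not explicitly disabled it.
    return not config.disable_vision and litellm.supports_vision(config.model)
```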
* fix a unit test
* regenerate: added LOG_TO_FILE and FORCE_REGENERATE env flags
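One plausible way the regenerate script consumes these flags; only the flag names come from the commit, the semantics sketched here are assumptions:

```python
import os

# Flag names are from the commit above; their interpretation is an assumption.
LOG_TO_FILE = os.environ.get("LOG_TO_FILE", "false").lower() == "true"
FORCE_REGENERATE = os.environ.get("FORCE_REGENERATE", "false").lower() == "true"

if FORCE_REGENERATE:
    print("Ignoring existing mock files; responses will be regenerated.")
if LOG_TO_FILE:
    print("LLM logs will be written to the logs folder instead of stdout.")
```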
* try to fix path to logs folder in workflow
* llm: prevent index error
* try FORCE_USE_LLM in regenerate
* tweaks everywhere...
* fix 2 random unit test errors :(
* added FORCE_REGENERATE_TESTS=true to regenerate CLI
* fix test_lint_file_fail_typescript again
* double-quotes for env vars in workflow; llm logger set to debug
* fix typo in regenerate
* regenerate iterations now 20; applied iteration counter fix by Li
* regenerate: pass FORCE_REGENERATE flag into env
* fixes for integration tests; several mock files updated
* browsing_agent: fix response_parser.py adding ) to empty response
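Roughly the guard this fix implies; the parser shape and function name are assumptions:

```python
def close_action_call(action_str: str) -> str:
    # Sketch of the fix above: an empty model response used to come back
    # as ")" because the closing parenthesis was appended unconditionally.
    action_str = action_str.strip()
    if not action_str:
        return ""  # nothing to close on an empty response
    if not action_str.endswith(")"):
        action_str += ")"  # repair a truncated function-call style action
    return action_str
```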
* test_browse_internet: fix skipif and revert obsolete mock files
* regenerate: fix bracketing for http server start/kill conditions
* disable test_browse_internet for CodeAct*Agents; mock files updated after merge
* add more mock files that were missed earlier
* reverts after review feedback from Li
* forgot one
* browsing agent test, partial fixes and updated mock files
* test_browse_internet works in my WSL now!
* adapt unit test test_prompt_caching.py
* add DEBUG to regenerate workflow command
* convert regenerate workflow params to inputs
* more integration test mock files updated
* more files
* test_prompt_caching: restored test_prompt_caching_headers purpose
* file_ops: fix potential exception (e.g., "cross-device copy"); updated mock files accordingly
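For reference, the usual shape of a cross-device-safe move, which is presumably what this fix amounts to; `safe_move` is a hypothetical name:

```python
import os
import shutil


def safe_move(src: str, dst: str) -> None:
    try:
        os.rename(src, dst)
    except OSError:
        # os.rename raises "Invalid cross-device link" (errno.EXDEV) when src
        # and dst sit on different filesystems; shutil.move copies then deletes.
        shutil.move(src, dst)
```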
* reverts/changes wrt feedback from xingyao
* updated docs and config template
* code cleanup wrt review feedback
* rename more opendevin occurrences
* remove DOCKER_IMAGE variable from Makefile
* Revert rename in evaluation/swe_bench/run_infer.py
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>
---------
Co-authored-by: Xingyao Wang <xingyao@all-hands.dev>