* dummy test change
* regen yml: 1st install python 3.11, then poetry
* fix caching for poetry; old entry for python was rather useless
* fix steps order (cache before poetry)
* add poetry caching to ghcr_runtime; fix fork conditions
* ghcr_runtime: more caching actions; condition fixes
* fix interim action error (order of steps)
* cache@v4 instead of v3
* fixed interim typo for 2 fork conditions
* runtime/test_env_vars: compacted multiple tests into one to reduce time
* fix the fork `if` condition yet again
* (feat) making prompt caching optional instead of enabled default
At present, only the Claude models support prompt caching, and only as an experimental feature; therefore, it should be implemented as an optional setting rather than being enabled by default.
Signed-off-by: Yi Lin <teroincn@gmail.com>
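The opt-in decision above can be sketched as follows. This is a minimal illustration, not the actual OpenHands implementation: the config class, flag name, and model list are all hypothetical stand-ins for whatever the codebase actually uses.

```python
from dataclasses import dataclass

# Hypothetical list of cache-capable models; the real set lives in the codebase.
CACHE_CAPABLE_MODELS = ("claude-3-5-sonnet", "claude-3-haiku")

@dataclass
class LLMConfig:
    model: str
    caching_prompt: bool = False  # opt-in: caching stays off unless requested

def prompt_caching_active(config: LLMConfig) -> bool:
    """Caching is used only when explicitly enabled AND the model supports it."""
    return config.caching_prompt and any(
        m in config.model for m in CACHE_CAPABLE_MODELS
    )
```

With this shape, `prompt_caching_active(LLMConfig("claude-3-5-sonnet-20240620"))` is false (not opted in), while passing `caching_prompt=True` enables it only for supported models.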
* handle the conflict
* fix unittest mock return value
* fix lint error in whitespace
---------
Signed-off-by: Yi Lin <teroincn@gmail.com>
* Update docs on LLM providers for consistency
* Update headless command
* minor tweaks based on feedback
---------
Co-authored-by: Robert Brennan <contact@rbren.io>
Co-authored-by: Robert Brennan <accounts@rbren.io>
* feat: add SWE-bench fullset support
* fix instance image list
* update eval script and documentation
* increase timeout for remote runtime
* add push script
* handle the case when ret push is a generator
* update pbar
* set SWE-Bench default to run SWE-Bench lite
* add script to cleanup remote runtime
* fix the case when the tag is too long
* update README
* update readme for cleanup
* rename od to oh
* Update evaluation/swe_bench/README.md
Co-authored-by: Graham Neubig <neubig@gmail.com>
* Update evaluation/swe_bench/README.md
Co-authored-by: Graham Neubig <neubig@gmail.com>
* Update evaluation/swe_bench/scripts/cleanup_remote_runtime.sh
Co-authored-by: Graham Neubig <neubig@gmail.com>
* Update evaluation/swe_bench/scripts/cleanup_remote_runtime.sh
Co-authored-by: Graham Neubig <neubig@gmail.com>
* Update evaluation/swe_bench/scripts/cleanup_remote_runtime.sh
Co-authored-by: Graham Neubig <neubig@gmail.com>
* get API key and runtime from env vars
---------
Co-authored-by: Graham Neubig <neubig@gmail.com>
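Reading the key and runtime from the environment might look like the sketch below. The variable names (`ALLHANDS_API_KEY`, `RUNTIME`) are assumptions for illustration; the eval scripts may use different names.

```python
import os

def get_remote_runtime_settings() -> tuple[str, str]:
    """Read the API key and runtime choice from the environment.

    Fails loudly if the key is missing; falls back to a default runtime.
    Env var names here are hypothetical.
    """
    try:
        api_key = os.environ["ALLHANDS_API_KEY"]
    except KeyError:
        raise RuntimeError("ALLHANDS_API_KEY is not set") from None
    runtime = os.environ.get("RUNTIME", "remote")
    return api_key, runtime
```

Keeping secrets out of the scripts themselves means the same eval command works locally and in CI without edits.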
* update badges
* fix badges
* better badges
* move credits
* more badge work
* add gh logo
* update some copy
* update logo
* fix height
* update text
* emdash
* remove cruft
* move title
* update links
* add hr
* white logo
* move some stuff to getting-started
* revert logo
* more copy changes
* minor tweaks
* fix sidebar
* explicit sidebar
* words
* fix tag
* fix how-to
* more docs work
* update styles
* fix up custom sandbox docs
* change eval title
* fix up getting-started
* fix getting started
* update to 0.9.2
* update screenshot
* add company link
* fix dark mode
* minor fixes
* update image
* update headless and cli docs
* update readme
* fix links
* revert package
* rename links
* fix links
* fix link
* change to claude
* Add documentation for CLI mode
Fixes #3703
Add documentation for CLI mode in OpenHands.
* **New Documentation**: Add `docs/modules/usage/how-to/cli-mode.md` to document CLI mode.
- Include instructions on starting an interactive OpenHands session via the command line.
- Explain the difference between CLI mode and headless mode.
- Provide examples of CLI commands and expected outputs.
* **Update Existing Documentation**: Modify `docs/modules/usage/how-to/headless-mode.md`.
- Clarify the difference between headless mode and CLI mode.
- Add a reference to the new CLI mode documentation.
* Update cli-mode.md
* Update headless-mode.md
---------
Co-authored-by: Robert Brennan <accounts@rbren.io>
Co-authored-by: tofarr <tofarr@gmail.com>
* CodeActAgent: fix message prep if prompt caching is not supported
* fix python version in regen tests workflow
* fix "mock_completion" method in conftest
* add disable_vision to LLMConfig; revert change in message parsing in llm.py
* format messages in several files for completion
* refactored message(s) formatting (llm.py); added vision_is_active()
* fix a unit test
* regenerate: added LOG_TO_FILE and FORCE_REGENERATE env flags
* try to fix path to logs folder in workflow
* llm: prevent index error
* try FORCE_USE_LLM in regenerate
* tweaks everywhere...
* fix 2 random unit test errors :(
* added FORCE_REGENERATE_TESTS=true to regenerate CLI
* fix test_lint_file_fail_typescript again
* double-quotes for env vars in workflow; llm logger set to debug
* fix typo in regenerate
* regenerate iterations now 20; applied iteration counter fix by Li
* regenerate: pass FORCE_REGENERATE flag into env
* fixes for integration tests; several mock files updated
* browsing_agent: fix response_parser.py adding ) to empty response
* test_browse_internet: fix skipif and revert obsolete mock files
* regenerate: fix bracketing for http server start/kill conditions
* disable test_browse_internet for CodeAct*Agents; mock files updated after merge
* include more mock files missed earlier
* reverts after review feedback from Li
* forgot one
* browsing agent test, partial fixes and updated mock files
* test_browse_internet works in my WSL now!
* adapt unit test test_prompt_caching.py
* add DEBUG to regenerate workflow command
* convert regenerate workflow params to inputs
* more integration test mock files updated
* more files
* test_prompt_caching: restored test_prompt_caching_headers purpose
* file_ops: fix potential exceptions (e.g. "cross-device copy"); fixed mock files accordingly
* reverts/changes wrt feedback from xingyao
* updated docs and config template
* code cleanup wrt review feedback
* Catch exception and return finish action with an exception message in case of exception in llm completion
* Remove exception logs
* Raise llm response error for any exception in llm completion
* Raise LLMResponseError from async completion and async streaming completion as well