Welcome to OpenHands (previously called "OpenDevin"), a platform for autonomous software engineers powered by AI and LLMs.
OpenHands agents collaborate with human developers to write code, fix bugs, and ship features.
⚡ Getting Started
OpenHands works best with Docker version 26.0.0+ (Docker Desktop 4.31.0+). You must be using Linux, macOS, or WSL on Windows.
To start OpenHands in a Docker container, run the following commands in your terminal:
Warning
When you run the following command, files in ./workspace may be modified or deleted.
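# Set WORKSPACE_BASE to the host directory that OpenHands may read and modify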
WORKSPACE_BASE=$(pwd)/workspace
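# Start the OpenHands app container; the UI will be available on port 3000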
docker run -it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name openhands-app-$(date +%Y%m%d%H%M%S) \
ghcr.io/all-hands-ai/openhands:0.9
Note
This command pulls the 0.9 tag, which represents the most recent stable release of OpenHands. You have other options as well:
- For a specific release version, use ghcr.io/all-hands-ai/openhands:<OpenHands_version> (replace <OpenHands_version> with the desired version number).
- For the most up-to-date development version, use ghcr.io/all-hands-ai/openhands:main. This version may be unstable and is recommended for testing or development purposes only.

Choose the tag that best suits your needs based on stability requirements and desired features.
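For example, you can pull a different tag ahead of time and substitute it into the docker run command above (illustrative commands; the other flags stay the same):
docker pull ghcr.io/all-hands-ai/openhands:0.9    # latest stable release
docker pull ghcr.io/all-hands-ai/openhands:main   # development build, may be unstable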
You'll find OpenHands running at http://localhost:3000 with access to ./workspace. To have OpenHands operate on your code, place it in ./workspace.
OpenHands will only have access to this workspace folder; the rest of your system is not affected, since the agent runs in a secure Docker sandbox.
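For example, you could clone an existing project into the workspace before starting the container (the repository URL below is a placeholder):
mkdir -p workspace
git clone https://github.com/<your-org>/<your-repo>.git workspace/<your-repo>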
Upon opening OpenHands, you must select the appropriate Model and enter the API Key in the settings dialog, which should pop up automatically. These can be changed at any time via the Settings button (gear icon) in the UI. If the required model does not appear in the list, you can enter it manually in the text box.
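OpenHands resolves model names through LiteLLM, so manually entered models generally follow a provider/model naming scheme. The names below are illustrative examples only; check your provider and the OpenHands documentation for the exact identifier:
anthropic/claude-3-5-sonnet-20240620
ollama/llama3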
For the development workflow, see Development.md.
Are you having trouble? Check out our Troubleshooting Guide.
🚀 Documentation
To learn more about the project, and for tips on using OpenHands, check out our documentation.
There you'll find guides on using different LLM providers (such as Ollama and Anthropic's Claude), troubleshooting tips, and advanced configuration options.
🤝 How to Contribute
OpenHands is a community-driven project, and we welcome contributions from everyone. Whether you're a developer, a researcher, or simply enthusiastic about advancing the field of software engineering with AI, there are many ways to get involved:
- Code Contributions: Help us develop new agents, core functionality, the frontend and other interfaces, or sandboxing solutions.
- Research and Evaluation: Contribute to our understanding of LLMs in software engineering, participate in evaluating the models, or suggest improvements.
- Feedback and Testing: Use the OpenHands toolset, report bugs, suggest features, or provide feedback on usability.
For details, please check CONTRIBUTING.md.
🤖 Join Our Community
Whether you're a developer, a researcher, or simply enthusiastic about OpenHands, we'd love to have you in our community. Let's make software engineering better together!
- Slack workspace - Here we talk about research, architecture, and future development.
- Discord server - This is a community-run server for general discussion, questions, and feedback.
📈 Progress
📜 License
Distributed under the MIT License. See LICENSE for more information.
🙏 Acknowledgements
OpenHands is built by a large number of contributors, and every contribution is greatly appreciated! We also build upon other open source projects, and we are deeply thankful for their work.
For a list of open source projects and licenses used in OpenHands, please see our CREDITS.md file.
📚 Cite
@misc{opendevin,
  title={{OpenDevin: An Open Platform for AI Software Developers as Generalist Agents}},
  author={Xingyao Wang and Boxuan Li and Yufan Song and Frank F. Xu and Xiangru Tang and Mingchen Zhuge and Jiayi Pan and Yueqi Song and Bowen Li and Jaskirat Singh and Hoang H. Tran and Fuqiang Li and Ren Ma and Mingzhang Zheng and Bill Qian and Yanjun Shao and Niklas Muennighoff and Yizhe Zhang and Binyuan Hui and Junyang Lin and Robert Brennan and Hao Peng and Heng Ji and Graham Neubig},
  year={2024},
  eprint={2407.16741},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2407.16741},
}

