# Evaluation
This folder contains code and resources to run experiments and evaluations.
## Logistics
To better organize the evaluation folder, we should follow the rules below:
- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all the preprocessing/evaluation/analysis scripts.
- Raw data and experimental records should not be stored within this repo.
- Model outputs should be stored in this huggingface space for visualization.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
## Supported Benchmarks
- SWE-Bench: `evaluation/swe_bench`
- HumanEvalFix: `evaluation/humanevalfix`
- GAIA: `evaluation/gaia`
- Entity Deduction Arena (EDA): `evaluation/EDA`
## Result Visualization
Check this huggingface space for visualization of existing experimental results.
### Upload your results
You can fork our huggingface evaluation outputs repo and submit your evaluation results to our hosted huggingface repo via a PR, following the guide here.
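If you prefer to do this programmatically, the `huggingface_hub` Python library can open such a PR for you. Below is a minimal sketch, assuming your results sit in a local `outputs/swe_bench` folder; the repo id `your-org/evaluation-outputs`, the repo type, and the folder layout are hypothetical placeholders, so substitute the actual values from the guide linked above.

```python
# Minimal sketch: open a PR carrying your evaluation outputs via huggingface_hub.
# ASSUMPTIONS: the repo id, repo type, and folder layout below are hypothetical
# placeholders; take the real values from the upload guide linked above.
from huggingface_hub import HfApi

api = HfApi()  # picks up your token from `huggingface-cli login` or HF_TOKEN

api.upload_folder(
    folder_path="outputs/swe_bench",        # local folder holding your results
    path_in_repo="outputs/swe_bench",       # destination path inside the repo
    repo_id="your-org/evaluation-outputs",  # placeholder: the hosted outputs repo
    repo_type="space",                      # assumed: outputs live in a HF space
    create_pr=True,                         # open a pull request instead of pushing
    commit_message="Add my evaluation results",
)
```

Passing `create_pr=True` opens a pull request against the hosted repo directly, so no manual fork is needed.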