Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To keep the evaluation folder organized, we follow the rules below:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all of the preprocessing, evaluation, and analysis scripts for that benchmark (see the example layout after this list).
  • Raw data and experimental records should not be stored within this repo.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
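As a rough illustration, a benchmark subfolder might be laid out as below; the file and folder names here are hypothetical examples, not a required convention.

  evaluation/
    swe_bench/
      README.md       setup and usage instructions
      run_infer.py    entry point for preprocessing and inference
      scripts/        evaluation and analysis helpers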

Supported Benchmarks

Each supported benchmark lives in its own subfolder (e.g., evaluation/swe_bench, evaluation/agent_bench); see the README in each subfolder for setup and usage instructions.

Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

Upload Your Results

You can fork our Hugging Face evaluation outputs repo and submit your evaluation results as a PR to our hosted Hugging Face repo, following the guide here.
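If you prefer to script the upload, the sketch below uses the huggingface_hub Python client to push a local results folder and open a pull request. The folder_path and repo_id are placeholders of our own choosing, not the canonical names; follow the guide above for the actual hosted repo.

  # Minimal sketch, assuming huggingface_hub is installed and you have
  # authenticated via `huggingface-cli login` (or the HF_TOKEN env var).
  from huggingface_hub import HfApi

  api = HfApi()
  api.upload_folder(
      folder_path="evaluation/outputs/my_experiment",  # placeholder local path
      repo_id="your-org/evaluation-outputs",           # placeholder repo id; see the guide
      repo_type="dataset",
      create_pr=True,  # open a pull request rather than committing directly
      commit_message="Add evaluation results",
  )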