# Evaluation

This folder contains code and resources to run experiments and evaluations.

## Logistics

To keep the evaluation folder organized, we should follow the rules below:

- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all the preprocessing, evaluation, and analysis scripts for SWE-bench (see the sketch after this list).
- Raw data and experimental records should not be stored within this repo.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
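
As a rough illustration of this layout (`swe_bench` and `biocoder` exist in this folder; the trailing entry stands for any future benchmark):

```
evaluation/
├── swe_bench/    # preprocessing/evaluation/analysis scripts for SWE-bench
├── biocoder/     # scripts for the BioCoder benchmark
└── ...           # one subfolder per benchmark or experiment
```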

## Supported Benchmarks

Each supported benchmark lives in its own subfolder of `evaluation/` (e.g., `evaluation/swe_bench`, `evaluation/biocoder`); see the corresponding subfolder's README for setup and usage.

## Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.
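
If you prefer to inspect the raw outputs locally instead, below is a minimal sketch using `huggingface_hub`. The `repo_id` is a placeholder for the hosted evaluation-outputs repo linked below, not a confirmed identifier:

```python
# Minimal sketch: pull the shared evaluation outputs for local analysis.
# Assumes `pip install huggingface_hub`; the repo_id below is a placeholder,
# not the confirmed identifier of the hosted outputs repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<org>/evaluation-outputs",  # placeholder: use the repo linked above
    repo_type="dataset",                 # assumption: outputs are hosted as a dataset repo
)
print(f"Evaluation outputs downloaded to: {local_dir}")
```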

## Upload your results

You can start your own fork of our Hugging Face evaluation outputs and submit your evaluation results to our hosted Hugging Face repo via a PR, following the guide here.
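
If you'd rather script the submission than use the web UI, here is a minimal sketch using `huggingface_hub`, which can open a PR against a Hugging Face repo directly. The `repo_id` and folder paths are placeholders; follow the guide above for the expected directory structure:

```python
# Minimal sketch: upload local evaluation results and open a PR on the
# hosted outputs repo. Assumes `pip install huggingface_hub` and a valid
# HF token (e.g., via `huggingface-cli login`). repo_id and paths are
# placeholders, not confirmed values.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="outputs/swe_bench/my_model",   # placeholder: your local results
    path_in_repo="outputs/swe_bench/my_model",  # placeholder: target path in the repo
    repo_id="<org>/evaluation-outputs",         # placeholder: the hosted outputs repo
    repo_type="dataset",                        # assumption: hosted as a dataset repo
    commit_message="Add evaluation results for my_model on swe_bench",
    create_pr=True,                             # opens a pull request instead of pushing
)
```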