MiniWoB++ Evaluation with OpenHands Browsing Agents

This folder contains the evaluation harness for the MiniWoB++ benchmark, powered by BrowserGym, which makes it easy to evaluate how well a browsing-capable agent performs on synthetic web browsing tasks.

Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.
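
The LLM config group passed to the run script below (e.g. llm.claude-35-sonnet-eval) corresponds to an [llm.<name>] section in your config.toml. A minimal sketch, assuming the standard OpenHands config layout; the model name and key below are placeholders:

[llm.claude-35-sonnet-eval]
# Any litellm-compatible model identifier (placeholder value)
model = "anthropic/claude-3-5-sonnet-20240620"
# Placeholder key; use your own credentials
api_key = "sk-..."
temperature = 0.0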

Test if your environment works

Open the MiniWoB URLs in your browser and check that they load correctly.
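
If the pages do not load, you may need to point BrowserGym at a local copy of the MiniWoB++ task pages. A minimal sketch, assuming BrowserGym reads the base URL from the MINIWOB_URL environment variable; the clone location is a placeholder:

# Clone the MiniWoB++ task pages and expose them to BrowserGym
git clone https://github.com/Farama-Foundation/miniwob-plusplus.git
export MINIWOB_URL="file://$PWD/miniwob-plusplus/miniwob/html/miniwob/"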

Run Evaluation

./evaluation/miniwob/scripts/run_infer.sh llm.claude-35-sonnet-eval
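
Many OpenHands evaluation scripts also accept further positional arguments after the LLM config group (for example a git revision, the agent class, and an instance limit); that pattern is an assumption here, so check the script itself for its exact signature:

# Hypothetical extended invocation; verify against scripts/run_infer.sh
./evaluation/miniwob/scripts/run_infer.sh llm.claude-35-sonnet-eval HEAD BrowsingAgent 10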

Results will be in evaluation/evaluation_outputs/outputs/miniwob/

To calculate the average reward, run:

poetry run python evaluation/miniwob/get_success_rate.py evaluation/evaluation_outputs/outputs/miniwob/SOME_AGENT/EXP_NAME/output.jsonl
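
For a quick sanity check without Python, an equivalent jq one-liner, assuming each line of output.jsonl stores the per-task reward under test_result.reward (the field path is an assumption; adjust it to your output schema):

# Average the reward field across all records (field path assumed)
jq -s 'map(.test_result.reward) | add / length' evaluation/evaluation_outputs/outputs/miniwob/SOME_AGENT/EXP_NAME/output.jsonl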

Submit your evaluation results

You can fork our Hugging Face evaluation outputs repository and submit a PR with your evaluation results, following the guide here.

BrowsingAgent V1.0 results

Tested on BrowsingAgent V1.0

MiniWoB++, 125 tasks (3 runs because tasks are randomly initialized), max 10 steps per task

  • GPT-4o: 0.384, 0.416, 0.424, avg: 0.408
  • GPT-3.5: 0.288, 0.256, 0.272, avg: 0.272