
# Gorilla APIBench Evaluation with OpenDevin

This folder contains the evaluation harness we built on top of the original Gorilla APIBench (paper).

## Setup Environment

Please follow this document to set up a local development environment for OpenDevin.

## Configure OpenDevin and your LLM

Run `make setup-config` to set up the `config.toml` file if it does not exist at the root of the workspace.
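For reference, a minimal `config.toml` might look like the sketch below. The group name `llm` and the field values are illustrative; consult the OpenDevin configuration docs for the exact schema your version expects.

```toml
# Hypothetical minimal config: one LLM config group named "llm",
# which is the group name you later pass as [model_config].
[llm]
model = "gpt-4-turbo"
api_key = "sk-..."
```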

## Run Inference on APIBench Instances

Make sure your Docker daemon is running, then run this bash script:

```bash
bash evaluation/gorilla/scripts/run_infer.sh [model_config] [agent] [eval_limit] [hubs]
```

where `model_config` is mandatory and all other arguments are optional:

- `model_config`, e.g. `llm`, is the config group name for your LLM settings, as defined in your `config.toml`.
- `agent`, e.g. `CodeActAgent`, is the name of the agent to benchmark, defaulting to `CodeActAgent`.
- `eval_limit`, e.g. `10`, limits the evaluation to the first `eval_limit` instances. By default, the script evaluates 1 instance.
- `hubs`, the hub(s) from APIBench to evaluate. Choose one or more of `torch` (abbreviated `th`), `hf` (abbreviation of `huggingface`), and `tf` (abbreviation of `tensorflow`). The default is `hf,torch,tf`.

Note: because the arguments are positional, in order to use `eval_limit` you must also set `agent`, and in order to use `hubs` you must also set `eval_limit`.
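This constraint follows from the arguments being positional with defaults. A minimal sketch of that pattern in bash (hypothetical variable names, not the actual contents of `run_infer.sh`):

```shell
#!/usr/bin/env bash
# Simulate invoking the script with only model_config supplied.
set -- llm

# Positional parameters with fallback defaults: a later argument can
# only be set on the command line if every earlier one is also given.
MODEL_CONFIG=$1
AGENT=${2:-CodeActAgent}   # default agent
EVAL_LIMIT=${3:-1}         # default: evaluate 1 instance
HUBS=${4:-hf,torch,tf}     # default hubs

echo "$MODEL_CONFIG $AGENT $EVAL_LIMIT $HUBS"
# → llm CodeActAgent 1 hf,torch,tf
```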

Let's say you'd like to run 10 instances using `llm` and `CodeActAgent` on the `th` test set; your command would be:

```bash
bash evaluation/gorilla/scripts/run_infer.sh llm CodeActAgent 10 th
```