# EDA Evaluation
This folder contains the evaluation harness for evaluating agents on the Entity-Deduction-Arena (EDA) benchmark, introduced in the paper *Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games*, presented at the ACL 2024 main conference.
## Configure OpenDevin and your LLM
Create a `config.toml` file at the root of the workspace if it does not already exist. Please check the main README.md for how to set this up.
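For reference, a config group for the model used below might look like the following sketch. This is an assumption about the layout, not the definitive format: the `[llm.<name>]` group name, keys, and values are illustrative placeholders, so consult the main README.md for the exact schema your OpenDevin version expects.

```toml
# Hypothetical config group -- key names and layout are assumptions;
# check the main README.md for the exact schema.
[llm.eval_gpt4o_2024_05_13]
model = "gpt-4o-2024-05-13"   # model the agent will use
api_key = "sk-XXX"            # placeholder; use your own key
temperature = 0.0             # deterministic decoding for reproducible evaluation
```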
## Start the evaluation
```bash
export OPENAI_API_KEY="sk-XXX"  # This is required for evaluation (to simulate the other party in the conversation)
./evaluation/EDA/scripts/run_infer.sh [model_config] [agent] [dataset] [eval_limit]
```
where `model_config` is mandatory, while `agent`, `dataset`, and `eval_limit` are optional.
- `model_config`, e.g. `eval_gpt4_1106_preview`, is the config group name for your LLM settings, as defined in your `config.toml`.
- `agent`, e.g. `CodeActAgent`, is the name of the agent for benchmarks, defaulting to `CodeActAgent`.
- `dataset`: there are two tasks in this evaluation. Specify `dataset` to test on either the `things` or the `celebs` task.
- `eval_limit`, e.g. `10`, limits the evaluation to the first `eval_limit` instances. By default it infers all instances.
Let's say you'd like to run 10 instances of the `things` task using `eval_gpt4o_2024_05_13` and `CodeActAgent`,
then your command would be:

```bash
./evaluation/EDA/scripts/run_infer.sh eval_gpt4o_2024_05_13 CodeActAgent things 10
```
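Similarly, to evaluate the first 10 instances of the `celebs` task with the same model and agent:

```bash
./evaluation/EDA/scripts/run_infer.sh eval_gpt4o_2024_05_13 CodeActAgent celebs 10
```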
## Reference
```bibtex
@inproceedings{zhang2023entity,
  title={Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games},
  author={Zhang, Yizhe and Lu, Jiarui and Jaitly, Navdeep},
  booktitle={ACL},
  year={2024}
}
```