97 lines · 3.0 KiB · Bash · Executable File
#!/bin/bash

set -e

# assert user name is `root`
if [ "$USER" != "root" ]; then
    echo "Error: This script is intended to be run by the 'root' user only." >&2
    exit 1
fi

source ~/.bashrc

SWEUTIL_DIR=/swe_util

# Create logs directory
LOG_DIR=/opendevin/logs
mkdir -p $LOG_DIR && chmod 777 $LOG_DIR
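# (Assumption: the chmod 777 above is so that non-root users inside the sandbox can
# also write to the log directory; the script itself runs as root.)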

# FIXME: Cannot read SWE_INSTANCE_ID from the environment variable
# SWE_INSTANCE_ID=django__django-11099
if [ -z "$SWE_INSTANCE_ID" ]; then
    echo "Error: SWE_INSTANCE_ID is not set." >&2
    exit 1
fi

# Read swe-bench-test-lite.json and extract the entry matching SWE_INSTANCE_ID
item=$(jq --arg INSTANCE_ID "$SWE_INSTANCE_ID" '.[] | select(.instance_id == $INSTANCE_ID)' $SWEUTIL_DIR/eval_data/instances/swe-bench-test-lite.json)

if [[ -z "$item" ]]; then
    echo "Error: no item found for instance ID $SWE_INSTANCE_ID." >&2
    exit 1
fi
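
# NOTE (illustrative, values not taken from the dataset): "$item" is a single
# SWE-bench record; the fields this script relies on look roughly like
#   { "instance_id": "django__django-11099", "repo": "django/django",
#     "version": "...", "patch": "<gold patch>", "test_patch": "<test patch>", ... }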

CONDA_ENV_NAME=$(echo "$item" | jq -r '.repo + "__" + .version | gsub("/"; "__")')

echo "CONDA_ENV_NAME: $CONDA_ENV_NAME"

SWE_TASK_DIR=/opendevin/swe_tasks
mkdir -p $SWE_TASK_DIR
# Dump test_patch to $SWE_TASK_DIR/test.patch
echo "$item" | jq -r '.test_patch' > $SWE_TASK_DIR/test.patch
# Dump patch (the gold patch) to $SWE_TASK_DIR/gold.patch
echo "$item" | jq -r '.patch' > $SWE_TASK_DIR/gold.patch
# Dump the item to $SWE_TASK_DIR/instance.json, excluding the "test_patch" and "patch" fields
echo "$item" | jq 'del(.test_patch, .patch)' > $SWE_TASK_DIR/instance.json

# Clear the workspace
rm -rf /workspace/*
# Copy repo to workspace
if [ -d /workspace/$CONDA_ENV_NAME ]; then
    rm -rf /workspace/$CONDA_ENV_NAME
fi
cp -r $SWEUTIL_DIR/eval_data/testbeds/$CONDA_ENV_NAME /workspace

# Reset swe-bench testbed and install the repo
. $SWEUTIL_DIR/miniforge3/etc/profile.d/conda.sh
conda config --set changeps1 False
conda config --append channels conda-forge
conda activate swe-bench-eval

mkdir -p $SWE_TASK_DIR/reset_testbed_temp
mkdir -p $SWE_TASK_DIR/reset_testbed_log_dir
SWE_BENCH_DIR=/swe_util/OD-SWE-bench
output=$(
    export PYTHONPATH=$SWE_BENCH_DIR && \
    cd $SWE_BENCH_DIR && \
    python swebench/harness/reset_swe_env.py \
        --swe_bench_tasks $SWEUTIL_DIR/eval_data/instances/swe-bench-test.json \
        --temp_dir $SWE_TASK_DIR/reset_testbed_temp \
        --testbed /workspace \
        --conda_path $SWEUTIL_DIR/miniforge3 \
        --instance_id $SWE_INSTANCE_ID \
        --log_dir $SWE_TASK_DIR/reset_testbed_log_dir \
        --timeout 900 \
        --verbose
)
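
# The reset script is expected to print lines of the form
#   repo_path: <path to the checked-out repo>
#   test_cmd: <command for running this instance's tests>
# (inferred from the awk patterns below, not verified against reset_swe_env.py).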
REPO_PATH=$(echo "$output" | awk -F': ' '/repo_path:/ {print $2}')
TEST_CMD=$(echo "$output" | awk -F': ' '/test_cmd:/ {print $2}')
echo "Repo Path: $REPO_PATH"
echo "Test Command: $TEST_CMD"

echo "export SWE_BENCH_DIR=\"$SWE_BENCH_DIR\"" >> ~/.bashrc
echo "export REPO_PATH=\"$REPO_PATH\"" >> ~/.bashrc
echo "export TEST_CMD=\"$TEST_CMD\"" >> ~/.bashrc

if [[ "$REPO_PATH" == "None" ]]; then
    echo "Error: failed to retrieve the repository path. The environment reset may not have succeeded, or its output was not in the expected format." >&2
    exit 1
fi

# Activate instance-specific environment
. $SWEUTIL_DIR/miniforge3/etc/profile.d/conda.sh
conda activate $CONDA_ENV_NAME
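
# From here on, errors are non-fatal: `set +e` below undoes the earlier `set -e`,
# presumably so that commands run after setup (e.g. by the agent) can fail without
# killing this shell.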
set +e