Multi swe gym (#10605)

Co-authored-by: openhands <openhands@all-hands.dev>
This commit is contained in:
juanmichelini
2025-09-22 21:56:26 +02:00
committed by GitHub
parent 818cc60b52
commit 547e1049f1
10 changed files with 1029 additions and 65 deletions

View File

@@ -0,0 +1,152 @@
<h1 align="center"> Training Software Engineering Agents and Verifiers with SWE-Gym </h1>
A Multi-SWE-bench implementation of SWE-Gym.
<p align="center">
<a href="https://www.jiayipan.com/" style="text-decoration: none;">Jiayi Pan<sup>*,1</sup></a>,
<a href="https://xwang.dev/" style="text-decoration: none;">Xingyao Wang<sup>*,2</sup></a>,
<a href="https://www.phontron.com/" style="text-decoration: none;">Graham Neubig<sup>3</sup></a>,
<a href="https://www.cs.toronto.edu/~ndjaitly/" style="text-decoration: none;">Navdeep Jaitly<sup>4</sup></a>,
<a href="https://blender.cs.illinois.edu/hengji.html" style="text-decoration: none;">Heng Ji<sup>2</sup></a>,
<a href="https://www.alanesuhr.com/" style="text-decoration: none;">Alane Suhr<sup>^,1</sup></a>,
<a href="https://dreasysnail.github.io/" style="text-decoration: none;">Yizhe Zhang<sup>^,4</sup></a>
</p>
<p align="center">
<sup>1</sup>UC Berkeley, <sup>2</sup>UIUC, <sup>3</sup>CMU, <sup>4</sup>Apple <br/>
<sub><sup>*</sup>Equal contribution, <sup>^</sup>Equal supervision</sub>
</p>
<p align="center">
<a href="https://arxiv.org/abs/2412.21139">📃 Paper</a>
<a href="https://huggingface.co/SWE-Gym" >🤗 Data & Models</a>
</p>
We present **SWE-Gym**, the first environment for training real-world software engineering agents.
We use it to train strong LM agents that achieve state-of-the-art open results on SWE-Bench, with early, promising scaling characteristics as we increase training and inference-time compute.
<p align="center">
<img src="https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/teaser.jpg?raw=true" width="100%" alt="teaser">
</p>
---
# Run SWE-Gym with OpenHands
The process of running SWE-Gym is very similar to how you'd run a SWE-Bench evaluation.
1. First, clone the OpenHands repo: `git clone https://github.com/All-Hands-AI/OpenHands.git`
2. Then set up the repo by following [Development.md](https://github.com/All-Hands-AI/OpenHands/blob/main/Development.md)
3. Serve your own model as an OpenAI-compatible endpoint and put that info in `config.toml`. You can do this by following the instructions [here](../../README.md#setup).
4. Then run the following to sample with 16x parallelism:
```bash
export ALLHANDS_API_KEY=ah-yourkey # You don't need to set this when running these in local docker container
./evaluation/benchmarks/multi_swe_bench/scripts/rollout_swegym.sh llm.mymodel-temp05 'train-t05' /path/to/dataset.jsonl 16
```
NOTE: SWE-Gym sampling with parallelism is currently only tested with the AllHands RemoteRuntime (limited beta). Fill out [this form](https://docs.google.com/forms/d/e/1FAIpQLSckVz_JFwg2_mOxNZjCtr7aoBFI2Mwdan3f75J_TrdMS1JV2g/viewform) to apply for access.
5. When `rollout_swegym.sh` finishes, you will get a file called `output.with_completions.jsonl.gz` (a minimal sketch of its contents follows below). You can then use [`./scripts/swegym/convert_data.ipynb`](./scripts/swegym/convert_data.ipynb) to convert it into the SFT data format.
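Each line of `output.with_completions.jsonl.gz` is one JSON record; the fields below follow what `scripts/eval/update_output_with_eval.py` and `scripts/eval/combine_final_completions.py` write (`instance_id`, `report.resolved`, `raw_completions`). A minimal inspection sketch (the path is a placeholder):
```python
import gzip
import json

# Placeholder path: point this at your evaluation output directory.
ROLLOUT_FILE = 'YOURPATH/output.with_completions.jsonl.gz'

with gzip.open(ROLLOUT_FILE, 'rt') as f:
    for line in f:
        record = json.loads(line)
        resolved = record.get('report', {}).get('resolved', False)
        completions = record.get('raw_completions')  # None if no completions were logged
        n_messages = len(completions['messages']) if completions else 0
        print(record['instance_id'], resolved, n_messages)
```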
## Running the Jupyter Notebook
To run the data conversion notebook, follow these steps:
1. Navigate to the OpenHands repository root:
```bash
cd openhands_repo
```
2. Set the PYTHONPATH and start Jupyter notebook:
```bash
PYTHONPATH=$(pwd) jupyter notebook
```
3. In the Jupyter interface, navigate to `evaluation/benchmarks/multi_swe_bench/scripts/swegym/convert_data.ipynb`
4. Update the file paths in the notebook:
- Set `FILE_PATHS` to point to your `output.with_completions.jsonl.gz` files
- Set `YOUR_OUTPUT_FOLDER` to your desired output directory
5. Run the notebook cells sequentially to process your data and generate the SFT training format (a quick sanity check on the output is sketched below).
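The notebook writes one JSON object per line with a single `messages` key holding plain `{"role": ..., "content": ...}` turns. A quick sanity check on the generated file (the file name is a placeholder; the real one encodes the number of resolved trajectories, e.g. `policy_traj_128k_swegym_<N>i.jsonl`):
```python
import json

# Placeholder name; adjust to the file the notebook actually produced.
SFT_FILE = 'YOUR_OUTPUT_FOLDER/policy_traj_128k_swegym_Ni.jsonl'

with open(SFT_FILE) as f:
    first = json.loads(next(f))

assert list(first.keys()) == ['messages']
for message in first['messages']:
    print(message['role'], len(message['content']))
```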
---
# More info about SWE-Gym
Progress in agents for software engineering has been limited by the lack of training environments that both include rigorous verification for reinforcement learning and cover the expansive tasks encountered in real-world repository-level engineering.
We introduce SWE-Gym: An Open Environment for Training Software Engineering Agents & Verifiers.
Our baselines achieve a new open SOTA of 32%/26% on SWE-Bench Verified/Lite, with promising scaling trends.
![SWE-Gym Scaling](https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/scaling.jpg?raw=true)
*SWE-Gym enables scalable improvements for software engineering agents at both training and inference time. Our current results are primarily bottlenecked by training and inference compute, rather than the size of our environment.*
## SWE-Gym Environment
We create SWE-Gym, the first environment for training SWE agents, with **2.4K real tasks from 11 Python repos** & a Lite split of 234 instances. SWE-Gym combines real-world Python tasks, repository context, executable environments, and test verification to train agents for solving software engineering problems.
![SWE-Gym Repo Distribution](https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/swe-gym.jpg?raw=true)
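For reference, each instance consumed by `run_infer.py` in this Multi-SWE-bench setup carries roughly the fields emitted by `scripts/data/data_change.py`. A made-up record, for illustration only:
```python
# Illustrative only: field names follow scripts/data/data_change.py, values are invented.
instance = {
    'repo': 'someorg/somerepo',                      # '{org}/{repo}'
    'instance_id': 'someorg__somerepo-1234',         # '{org}__{repo}-{number}'
    'problem_statement': 'Issue title\nIssue body',  # title + body of the first resolved issue
    'FAIL_TO_PASS': [],                              # left empty at rollout time
    'PASS_TO_PASS': [],
    'base_commit': '0123456789abcdef',               # base SHA of the pull request
    'version': '0.1',
}
```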
## SWE-Gym trains LMs as agents
When fine-tuned on fewer than 500 agent-environment interaction trajectories sampled from it with GPT-4o and Claude 3.5 Sonnet, we achieve **+14%** absolute gains on SWE-Bench Verified with a 32B LM-powered OpenHands agent.
![OpenHands Performance diff before and after training](https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/oh-agent.jpg?raw=true)
## SWE-Gym enables self-improvement
SWE-Gym is also effective across agent scaffolds. With rejection-sampling fine-tuning and the MoatlessTools scaffold, our 32B and 7B models achieve 20% and 10% respectively on SWE-Bench Lite through self-improvement.
<p align="center">
<img src="https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/ml-agent.jpg?raw=true" width="80%" alt="Moatless self-improvement">
</p>
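Rejection-sampling fine-tuning here means: roll out many trajectories, keep only the ones the environment verifies as resolved, and fine-tune on those. A minimal sketch of the filtering step, assuming a DataFrame like the one built in `convert_data.ipynb` (a boolean `resolved` column plus a chat-format `messages` column):
```python
import pandas as pd


def rejection_sample(trajectories: pd.DataFrame) -> pd.DataFrame:
    """Keep only environment-verified (resolved) trajectories for SFT."""
    kept = trajectories[trajectories['resolved']].copy()
    print(f'Kept {len(kept)} / {len(trajectories)} trajectories')
    return kept[['messages']]
```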
## SWE-Gym enables inference-time scaling
SWE-Gym enables inference-time scaling through verifiers trained on agent trajectories.
These verifiers identify the most promising solutions via best-of-n selection; together with our learned agents, they achieve 32%/26% on SWE-Bench Verified/Lite, a new open SoTA.
![Inference Time Scaling for Moatless Agent](https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/inference-ml.jpg?raw=true)
*Inference Time Scaling for Moatless Agent*
![Inference Time Scaling for OpenHands Agent](https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/inference-oh.jpg?raw=true)
*Inference Time Scaling for OpenHands Agent*
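Best-of-n selection itself is simple once a verifier scores candidate trajectories: sample n rollouts per instance and keep the highest-scoring one. A generic sketch (the verifier interface is assumed, not something shipped in this folder):
```python
from typing import Callable, Sequence


def best_of_n(candidates: Sequence[dict], verifier: Callable[[dict], float]) -> dict:
    """Return the candidate trajectory with the highest verifier score.

    `candidates` are rollouts for a single instance; `verifier` is an assumed
    scoring function, e.g. a trained outcome-reward model over trajectories.
    """
    return max(candidates, key=verifier)
```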
## Our baselines on SWE-Gym show strong scaling trends
Lastly, our ablations reveal strong scaling trends: performance is currently bottlenecked by training and inference compute rather than by the size of our dataset. Pushing these scaling trends further is an exciting direction for future work.
![](https://github.com/SWE-Gym/SWE-Gym/blob/main/assets/images/scaling.jpg?raw=true)
## Reproducing Results
**The Dataset**
To access the SWE-Gym dataset, check out our Hugging Face Hub page [SWE-Gym](https://huggingface.co/SWE-Gym).
The environment constants are currently saved in [SWE-Bench-Fork](https://github.com/SWE-Gym/SWE-Bench-Fork).
We also provide pre-built Docker images for each instance under the [xingyaoww/sweb.eval.x86_64](https://hub.docker.com/search?q=xingyaoww%2Fsweb.eval.x86_64.) prefix on Docker Hub.
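If you only need the task instances, they can be pulled from the Hugging Face Hub with the `datasets` library (the dataset id and split below are assumptions; check the hub page linked above for the exact names):
```python
from datasets import load_dataset

# Assumed id/split; see https://huggingface.co/SWE-Gym for the authoritative listing.
dataset = load_dataset('SWE-Gym/SWE-Gym', split='train')
print(len(dataset), dataset[0]['instance_id'])
```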
## 📚 Citation
```bibtex
@misc{pan2024trainingsoftwareengineeringagents,
title={Training Software Engineering Agents and Verifiers with SWE-Gym},
author={Jiayi Pan and Xingyao Wang and Graham Neubig and Navdeep Jaitly and Heng Ji and Alane Suhr and Yizhe Zhang},
year={2024},
eprint={2412.21139},
archivePrefix={arXiv},
primaryClass={cs.SE},
url={https://arxiv.org/abs/2412.21139},
}
```

View File

@@ -51,8 +51,8 @@ RUN_WITH_BROWSING = os.environ.get('RUN_WITH_BROWSING', 'false').lower() == 'tru
# TODO: migrate all swe-bench docker to ghcr.io/openhands
# TODO: adapt this to all languages
DOCKER_IMAGE_PREFIX = os.environ.get('EVAL_DOCKER_IMAGE_PREFIX', '')
LANGUAGE = os.environ.get('LANGUAGE', 'python')
DOCKER_IMAGE_PREFIX = os.environ.get('EVAL_DOCKER_IMAGE_PREFIX', 'mswebench')
LANGUAGE = os.environ.get('LANGUAGE', 'java')
logger.info(f'Using docker image prefix: {DOCKER_IMAGE_PREFIX}')
@@ -305,31 +305,19 @@ def get_instance_docker_image(instance: pd.Series):
instance_id = instance.get('instance_id', '')
tag_suffix = instance_id.split('-')[-1] if instance_id else ''
container_tag = f'pr-{tag_suffix}'
# pdb.set_trace()
return f'mswebench/{container_name}:{container_tag}'
# return "kong/insomnia:pr-8284"
# return "'sweb.eval.x86_64.local_insomnia"
# return "local_insomnia_why"
# return "local/kong-insomnia:pr-8117"
return f'{DOCKER_IMAGE_PREFIX}/{container_name}:{container_tag}'
def get_config(
instance: pd.Series,
metadata: EvalMetadata,
) -> OpenHandsConfig:
SWE_BENCH_CONTAINER_IMAGE = 'ghcr.io/opendevin/eval-swe-bench:full-v1.2.1'
if USE_INSTANCE_IMAGE:
# We use a different instance image for the each instance of swe-bench eval
# base_container_image = get_instance_docker_image(instance['instance_id'])
base_container_image = get_instance_docker_image(instance)
logger.info(
f'Using instance container image: {base_container_image}. '
f'Please make sure this image exists. '
f'Submit an issue on https://github.com/All-Hands-AI/OpenHands if you run into any issues.'
)
else:
base_container_image = SWE_BENCH_CONTAINER_IMAGE
logger.info(f'Using swe-bench container image: {base_container_image}')
base_container_image = get_instance_docker_image(instance)
logger.info(
f'Using instance container image: {base_container_image}. '
f'Please make sure this image exists. '
f'Submit an issue on https://github.com/All-Hands-AI/OpenHands if you run into any issues.'
)
sandbox_config = get_default_sandbox_config_for_eval()
sandbox_config.base_container_image = base_container_image
@@ -772,7 +760,6 @@ if __name__ == '__main__':
parser.add_argument(
'--dataset',
type=str,
default='princeton-nlp/SWE-bench',
help='data set to evaluate on, either full-test or lite-test',
)
parser.add_argument(
@@ -787,6 +774,7 @@ if __name__ == '__main__':
# so we don't need to manage file uploading to OpenHands's repo
# dataset = load_dataset(args.dataset, split=args.split)
# dataset = load_dataset(args.dataset)
logger.info(f'Loading dataset {args.dataset} with split {args.split} ')
dataset = load_dataset('json', data_files=args.dataset)
dataset = dataset[args.split]
swe_bench_tests = filter_dataset(dataset.to_pandas(), 'instance_id')
@@ -839,7 +827,7 @@ if __name__ == '__main__':
args.eval_num_workers,
process_instance,
timeout_seconds=120 * 60, # 2 hour PER instance should be more than enough
max_retries=5,
max_retries=3,
)
# Check if any instances reached maximum retries
check_maximum_retries_exceeded(metadata.eval_output_dir)

View File

@@ -1,37 +1,54 @@
import argparse
import json
input_file = 'XXX.jsonl'
output_file = 'YYY.jsonl'
with (
open(input_file, 'r', encoding='utf-8') as fin,
open(output_file, 'w', encoding='utf-8') as fout,
):
for line in fin:
line = line.strip()
if not line:
continue
def main(input_file, output_file):
with (
open(input_file, 'r', encoding='utf-8') as fin,
open(output_file, 'w', encoding='utf-8') as fout,
):
for line in fin:
line = line.strip()
if not line:
continue
data = json.loads(line)
item = data
data = json.loads(line)
item = data
# Extract the original fields
org = item.get('org', '')
repo = item.get('repo', '')
number = str(item.get('number', ''))
# Skip instances that don't have resolved_issues or have empty resolved_issues
if not item.get('resolved_issues') or len(item['resolved_issues']) == 0:
print(
f'Skipping instance {item.get("org", "")}/{item.get("repo", "")}-{item.get("number", "")} - no resolved_issues'
)
continue
new_item = {}
new_item['repo'] = f'{org}/{repo}'
new_item['instance_id'] = f'{org}__{repo}-{number}'
new_item['problem_statement'] = (
item['resolved_issues'][0].get('title', '')
+ '\n'
+ item['resolved_issues'][0].get('body', '')
)
new_item['FAIL_TO_PASS'] = []
new_item['PASS_TO_PASS'] = []
new_item['base_commit'] = item['base'].get('sha', '')
new_item['version'] = '0.1' # depends
# Extract the original fields
org = item.get('org', '')
repo = item.get('repo', '')
number = str(item.get('number', ''))
output_data = new_item
fout.write(json.dumps(output_data, ensure_ascii=False) + '\n')
new_item = {}
new_item['repo'] = f'{org}/{repo}'
new_item['instance_id'] = f'{org}__{repo}-{number}'
# Get the first resolved issue
resolved_issue = item['resolved_issues'][0]
title = resolved_issue.get('title') or ''
body = resolved_issue.get('body') or ''
new_item['problem_statement'] = title + '\n' + body
new_item['FAIL_TO_PASS'] = []
new_item['PASS_TO_PASS'] = []
new_item['base_commit'] = item['base'].get('sha', '')
new_item['version'] = '0.1' # depends
output_data = new_item
fout.write(json.dumps(output_data, ensure_ascii=False) + '\n')
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--input', required=True, help='Input .jsonl file path')
parser.add_argument('--output', required=True, help='Output .jsonl file path')
args = parser.parse_args()
main(args.input, args.output)

View File

@@ -0,0 +1,69 @@
import argparse
import gzip
import json
import os
from glob import glob
from tqdm import tqdm
tqdm.pandas()
# Load trajectories for resolved instances
def load_completions(output_dir: str, instance_id: str):
    glob_path = os.path.join(output_dir, 'llm_completions', instance_id, '*.json')
    files = sorted(glob(glob_path))  # this is ascending order
    # pick the last file (last turn)
    try:
        file_path = files[-1]
    except IndexError:
        # print(f'No files found for instance {instance_id}: files={files}')
        return None
    with open(file_path, 'r') as f:
        result = json.load(f)
    # create messages
    messages = result['messages']
    messages.append(result['response']['choices'][0]['message'])
    tools = result['kwargs'].get('tools', [])
    return {
        'messages': messages,
        'tools': tools,
    }


parser = argparse.ArgumentParser()
parser.add_argument('jsonl_path', type=str)
args = parser.parse_args()

output_dir = os.path.dirname(args.jsonl_path)
output_path = os.path.join(output_dir, 'output.with_completions.jsonl.gz')

# Check if output would be different from input
needs_update = False
with open(args.jsonl_path, 'r') as f_in:
    for line in tqdm(f_in, desc='Checking for changes'):
        data = json.loads(line)
        new_completions = load_completions(output_dir, data['instance_id'])
        current_completions = data.get('raw_completions')
        if current_completions != new_completions:
            needs_update = True
            break

if not needs_update:
    print('No updates required. Skipping file update.')
    exit(0)

if os.path.exists(output_path):
    print(f'Output file already exists at {output_path}, overwriting? (y/n)')
    if input() != 'y':
        print('Exiting...')
        exit(0)

# Process line by line
with open(args.jsonl_path, 'r') as f_in, gzip.open(output_path, 'wt') as f_out:
    for line in tqdm(f_in):
        data = json.loads(line)
        data['raw_completions'] = load_completions(output_dir, data['instance_id'])
        f_out.write(json.dumps(data) + '\n')

print(f'Saved compressed output to {output_path}')

View File

@@ -1,13 +1,11 @@
import argparse
import json
import re
IN_FILE = 'output.jsonl'
OUT_FILE = 'patch.jsonl'
def main():
with open(IN_FILE, 'r') as fin:
with open(OUT_FILE, 'w') as fout:
def main(input_file, output_file):
with open(input_file, 'r') as fin:
with open(output_file, 'w') as fout:
for line in fin:
data = json.loads(line)
groups = re.match(r'(.*)__(.*)-(.*)', data['instance_id'])
@@ -15,10 +13,14 @@ def main():
'org': groups.group(1),
'repo': groups.group(2),
'number': groups.group(3),
'fix_patch': data['test_result']['git_patch'],
'fix_patch': data.get('test_result', {}).get('git_patch', '') or '',
}
fout.write(json.dumps(patch) + '\n')
if __name__ == '__main__':
main()
parser = argparse.ArgumentParser()
parser.add_argument('--input', required=True, help='Input .jsonl file path')
parser.add_argument('--output', required=True, help='Output .jsonl file path')
args = parser.parse_args()
main(args.input, args.output)

View File

@@ -0,0 +1,70 @@
import argparse
import json
import os
import subprocess
def update_multi_swe_config(output_jsonl_path, config_path, dataset):
    path_to_parent = os.path.dirname(os.path.abspath(output_jsonl_path))
    converted_path = os.path.join(path_to_parent, 'output_converted.jsonl')

    # Run the conversion script
    subprocess.run(
        [
            'python3',
            './evaluation/benchmarks/multi_swe_bench/scripts/eval/convert.py',
            '--input',
            output_jsonl_path,
            '--output',
            converted_path,
        ],
        check=True,
    )

    # Create required directories
    os.makedirs(os.path.join(path_to_parent, 'eval_files', 'dataset'), exist_ok=True)
    os.makedirs(os.path.join(path_to_parent, 'eval_files', 'workdir'), exist_ok=True)
    os.makedirs(os.path.join(path_to_parent, 'eval_files', 'repos'), exist_ok=True)
    os.makedirs(os.path.join(path_to_parent, 'eval_files', 'logs'), exist_ok=True)

    # Prepare config dict
    config = {
        'mode': 'evaluation',
        'workdir': os.path.join(path_to_parent, 'eval_files', 'workdir'),
        'patch_files': [converted_path],
        'dataset_files': [dataset],
        'force_build': True,
        'output_dir': os.path.join(path_to_parent, 'eval_files', 'dataset'),
        'specifics': [],
        'skips': [],
        'repo_dir': os.path.join(path_to_parent, 'eval_files', 'repos'),
        'need_clone': True,
        'global_env': [],
        'clear_env': True,
        'stop_on_error': False,
        'max_workers': 5,
        'max_workers_build_image': 5,
        'max_workers_run_instance': 5,
        'log_dir': os.path.join(path_to_parent, 'eval_files', 'logs'),
        'log_level': 'DEBUG',
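        # Rewrite the 'git apply' line in the container's /home/fix-run.sh into fuzzy
        # 'patch' invocations (test.patch, then fix.patch) so slightly offset patches still apply.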
        'fix_patch_run_cmd': (
            'bash -c "apt update ; apt install -y patch ; '
            "sed -i 's@git apply.*@patch --batch --fuzz=5 -p1 -i /home/test.patch;"
            'patch --batch --fuzz=5 -p1 -i /home/fix.patch@g\' /home/fix-run.sh ; chmod +x /home/*.sh ; /home/fix-run.sh"'
        ),
    }

    # Save to multibench.config
    os.makedirs(os.path.dirname(config_path), exist_ok=True)
    with open(config_path, 'w') as f:
        json.dump(config, f, indent=4)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--input', required=True, help='Path to input file')
    parser.add_argument('--output', required=True, help='Path to create config')
    parser.add_argument('--dataset', required=True, help='Path to dataset')
    args = parser.parse_args()
    update_multi_swe_config(args.input, args.output, args.dataset)

View File

@@ -0,0 +1,176 @@
import argparse
import json
import os
from collections import defaultdict
from tqdm import tqdm
parser = argparse.ArgumentParser()
parser.add_argument('input_file', type=str)
parser.add_argument(
'--force',
action='store_true',
help='Force update all reports even if no changes are detected',
)
parser.add_argument(
'--overwrite-backup',
action='store_true',
help='Automatically overwrite existing backup files without prompting',
)
args = parser.parse_args()
dirname = os.path.dirname(args.input_file)
# Initialize counters and data structures
instance_id_to_status = defaultdict(
lambda: {
'empty_generation': False,
'resolved': False,
'failed_apply_patch': False,
'error_eval': False,
'test_timeout': False,
}
)
# Process official report if it exists
swebench_official_report_json = os.path.join(
dirname, 'eval_files/dataset/final_report.json'
)
openhands_remote_report_jsonl = args.input_file.replace(
'.jsonl', '.swebench_eval.jsonl'
)
if os.path.exists(swebench_official_report_json):
output_md_filepath = os.path.join(dirname, 'README.md')
with open(swebench_official_report_json, 'r') as f:
report = json.load(f)
# Convert instance IDs from "repo/name:pr-123" format to "repo__name-123" format
def convert_instance_id(instance_id):
"""Convert instance ID from slash/colon-pr format to double underscore/dash format."""
if '/' in instance_id and ':pr-' in instance_id:
# Split on '/' and ':pr-'
parts = instance_id.split('/')
if len(parts) == 2:
repo_part = parts[0]
name_and_pr = parts[1]
if ':pr-' in name_and_pr:
name, pr_number = name_and_pr.split(':pr-')
return f'{repo_part}__{name}-{pr_number}'
return instance_id
# Convert all instance ID lists in the report
for key in [
'resolved_ids',
'unresolved_ids',
'error_ids',
'empty_patch_ids',
'incomplete_ids',
]:
if key in report:
report[key] = [
convert_instance_id(instance_id) for instance_id in report[key]
]
output_md = (
'# Multi-SWE-bench Report\n'
'This folder contains the evaluation results of the SWE-bench using the [official evaluation docker containerization](https://github.com/princeton-nlp/SWE-bench/blob/main/docs/20240627_docker/README.md#choosing-the-right-cache_level).\n\n'
'## Summary\n'
f'- total instances: {report["total_instances"]}\n'
f'- submitted instances: {report["submitted_instances"]}\n'
f'- completed instances: {report["completed_instances"]}\n'
f'- empty patch instances: {report["empty_patch_instances"]}\n'
f'- resolved instances: {report["resolved_instances"]}\n'
f'- unresolved instances: {report["unresolved_instances"]}\n'
f'- error instances: {report["error_instances"]}\n'
)
output_md += '\n## Resolved Instances\n'
# instance_id to status
for instance_id in report['resolved_ids']:
instance_id_to_status[instance_id]['resolved'] = True
output_md += (
f'- [{instance_id}](./eval_outputs/{instance_id}/run_instance.log)\n'
)
output_md += '\n## Unresolved Instances\n'
for instance_id in report['unresolved_ids']:
output_md += (
f'- [{instance_id}](./eval_outputs/{instance_id}/run_instance.log)\n'
)
output_md += '\n## Error Instances\n'
for instance_id in report['error_ids']:
instance_id_to_status[instance_id]['error_eval'] = True
output_md += (
f'- [{instance_id}](./eval_outputs/{instance_id}/run_instance.log)\n'
)
output_md += '\n## Empty Patch Instances\n'
for instance_id in report['empty_patch_ids']:
instance_id_to_status[instance_id]['empty_generation'] = True
output_md += (
f'- [{instance_id}](./eval_outputs/{instance_id}/run_instance.log)\n'
)
output_md += '\n## Incomplete Instances\n'
for instance_id in report['incomplete_ids']:
output_md += (
f'- [{instance_id}](./eval_outputs/{instance_id}/run_instance.log)\n'
)
with open(output_md_filepath, 'w') as f:
f.write(output_md)
else:
print(
f'No report file found: Both {swebench_official_report_json} and {openhands_remote_report_jsonl} do not exist.'
)
exit()
# Before backup and update, check if any changes would be made (unless --force is used)
if not args.force:
needs_update = False
with open(args.input_file, 'r') as infile:
for line in tqdm(infile, desc='Checking for changes'):
data = json.loads(line)
instance_id = data['instance_id']
current_report = data.get('report', {})
new_report = instance_id_to_status[
instance_id
] # if no report, it's not resolved
if current_report != new_report:
needs_update = True
break
if not needs_update:
print('No updates detected. Skipping file update.')
exit()
else:
print('Force flag enabled. Updating all reports regardless of changes.')
# Backup and update the original file row by row
if os.path.exists(args.input_file + '.bak'):
if args.overwrite_backup:
print(
'Existing backup file found. Overwriting automatically due to --overwrite-backup flag.'
)
os.remove(args.input_file + '.bak')
else:
conf = input('Existing backup file found. Do you want to overwrite it? (y/n)')
if conf != 'y':
exit()
os.remove(args.input_file + '.bak')
os.rename(args.input_file, args.input_file + '.bak')
# Process and write file row by row
with (
open(args.input_file + '.bak', 'r') as infile,
open(args.input_file, 'w') as outfile,
):
for line in tqdm(infile, desc='Updating output file'):
data = json.loads(line)
instance_id = data['instance_id']
data['report'] = instance_id_to_status[instance_id]
outfile.write(json.dumps(data) + '\n')

View File

@@ -0,0 +1,146 @@
#!/bin/bash
# NOTE: this script is for rolling out the Multi-SWE-Gym dataset for **TRAINING**
# For more information, please refer to
# 1. the Github Repo: https://github.com/SWE-Gym/SWE-Gym
# 2. the paper: https://arxiv.org/abs/2412.21139
MODEL=$1 # eg your llm config name in config.toml (eg: "llm.claude-3-5-sonnet-20241022-t05")
EXP_NAME=$2 # "train-t05"
EVAL_DATASET=$3 # path to original dataset (jsonl file)
N_WORKERS=${4:-64}
N_RUNS=${5:-1}
export EXP_NAME=$EXP_NAME
# use 2x resources for rollout since some codebases are pretty resource-intensive
export DEFAULT_RUNTIME_RESOURCE_FACTOR=2
echo "MODEL: $MODEL"
echo "EXP_NAME: $EXP_NAME"
echo "EVAL_DATASET: $EVAL_DATASET"
# Generate DATASET path by adding _with_runtime_ before .jsonl extension
DATASET="${EVAL_DATASET%.jsonl}_with_runtime_.jsonl" # path to converted dataset
# Create the converted dataset file
echo "Creating converted dataset at: $DATASET"
poetry run python ./evaluation/benchmarks/multi_swe_bench/scripts/data/data_change.py --input "$EVAL_DATASET" --output "$DATASET"
SPLIT="train"
export LANGUAGE=java
if [ -z "$ALLHANDS_API_KEY" ] || [ "$RUNTIME" != "remote" ]; then
echo "ALLHANDS_API_KEY is not set or RUNTIME is not set to remote. Will rollout and evaluate locally using Docker. WARNING: A large value of N_WORKERS will result in a large number of Docker containers being spun up and may crash your machine."
export RUNTIME=docker
else
echo "ALLHANDS_API_KEY is set and RUNTIME is set to remote. Continuing rollout and evaluation with remote runtime..."
export SANDBOX_REMOTE_RUNTIME_API_URL="https://runtime.eval.all-hands.dev"
fi
#EVAL_LIMIT=3000
MAX_ITER=100
# ===== Run inference =====
source "evaluation/utils/version_control.sh"
get_openhands_version
echo "OPENHANDS_VERSION: $OPENHANDS_VERSION"
echo "MODEL_CONFIG: $MODEL_CONFIG"
echo "DATASET: $DATASET"
echo "EVAL_DOCKER_IMAGE_PREFIX: $EVAL_DOCKER_IMAGE_PREFIX"
# Default to NOT use Hint
export USE_INSTANCE_IMAGE=true
export USE_HINT_TEXT=false
export RUN_WITH_BROWSING=false
echo "USE_HINT_TEXT: $USE_HINT_TEXT"
EVAL_NOTE="$OPENHANDS_VERSION-no-hint-$EXP_NAME"
function run_eval() {
local eval_note=$1
export LANGUAGE=java
echo "About to run command"
COMMAND="EVAL_DOCKER_IMAGE_PREFIX=$EVAL_DOCKER_IMAGE_PREFIX; LANGUAGE=java;
poetry run python evaluation/benchmarks/multi_swe_bench/run_infer.py \
--agent-cls CodeActAgent \
--llm-config $MODEL \
--max-iterations $MAX_ITER \
--eval-num-workers $N_WORKERS \
--eval-note $eval_note \
--dataset $DATASET \
--split $SPLIT"
echo "Running command: $COMMAND"
if [ -n "$EVAL_LIMIT" ]; then
echo "EVAL_LIMIT: $EVAL_LIMIT"
COMMAND="$COMMAND --eval-n-limit $EVAL_LIMIT"
fi
# Run the command
eval $COMMAND
}
for run_idx in $(seq 1 $N_RUNS); do
while true; do
echo "### Running inference... ###"
unset SANDBOX_ENV_GITHUB_TOKEN # prevent the agent from using the github token to push
current_eval_note="$EVAL_NOTE-run_$run_idx"
echo "EVAL_NOTE: $current_eval_note"
echo "DATASET command: $DATASET"
#INFER_OUTPUT=$(run_eval $current_eval_note)
INFER_OUTPUT=$(run_eval $current_eval_note | tee /dev/stderr)
INFER_STATUS=$? # Capture the exit status of run_infer.sh
echo "INFER_STATUS: $INFER_STATUS"
echo "### Cleaning up remote runtime... ###"
./evaluation/utils/scripts/cleanup_remote_runtime.sh
if [ $INFER_STATUS -eq 0 ]; then
echo "### Inference completed successfully. ###"
break
else
echo "### Inference failed with exit code $INFER_STATUS. Retrying... ###"
fi
done
# Extract the output directory using the special delimiters
OUTPUT_FILE=$(echo "$INFER_OUTPUT" | grep -o '### OUTPUT FILE:.* ###' | sed 's/### OUTPUT FILE: \(.*\) ###/\1/')
echo "Got OUTPUT_FILE: $OUTPUT_FILE"
while true; do
echo "### Evaluating on $OUTPUT_FILE ... ###"
OUTPUT_CONFIG_FILE="${OUTPUT_FILE%.jsonl}_config.json"
export EVAL_SKIP_BUILD_ERRORS=true
pip install multi-swe-bench --quiet --disable-pip-version-check > /dev/null 2>&1
COMMAND="poetry run python ./evaluation/benchmarks/multi_swe_bench/scripts/eval/update_multi_swe_bench_config.py --input $OUTPUT_FILE --output $OUTPUT_CONFIG_FILE --dataset $EVAL_DATASET;
python -m multi_swe_bench.harness.run_evaluation --config $OUTPUT_CONFIG_FILE
"
if [ -n "$EVAL_LIMIT" ]; then
echo "EVAL_LIMIT: $EVAL_LIMIT"
COMMAND="$COMMAND --eval-n-limit $EVAL_LIMIT"
fi
echo "Running command: $COMMAND"
# Run the command
eval $COMMAND
EVAL_STATUS=$?
if [ $EVAL_STATUS -eq 0 ]; then
echo "### Evaluation completed successfully. ###"
break
else
echo "### Evaluation failed with exit code $EVAL_STATUS. Retrying... ###"
fi
./evaluation/utils/scripts/cleanup_remote_runtime.sh
done
# update the output with evaluation results
echo "### Updating the output with evaluation results... ###"
poetry run python evaluation/benchmarks/multi_swe_bench/scripts/eval/update_output_with_eval.py $OUTPUT_FILE
echo "### Combining the final completions... ###"
poetry run python evaluation/benchmarks/multi_swe_bench/scripts/eval/combine_final_completions.py $OUTPUT_FILE
echo "### DONE for run $run_idx! ###"
echo "You can find the final output at $(dirname $OUTPUT_FILE)/$FINAL_OUTPUT_FILE"
done

View File

@@ -47,8 +47,8 @@ if [ -z "$DATASET" ]; then
fi
if [ -z "$LANGUAGE" ]; then
echo "LANUGUAGE not specified, use default python"
LANGUAGE="python"
echo "LANGUAGE not specified, use default python"
LANGUAGE="java"
fi
if [ -z "$SPLIT" ]; then
@@ -69,10 +69,10 @@ fi
if [ -z "$EVAL_DOCKER_IMAGE_PREFIX" ]; then
if [ "$LANGUAGE" = "python" ]; then
echo "EVAL_DOCKER_IMAGE_PREFIX is docker.io/xingyaoww/ as default as LANUGUAGE is python"
echo "EVAL_DOCKER_IMAGE_PREFIX is docker.io/xingyaoww/ as default as LANGUAGE is python"
EVAL_DOCKER_IMAGE_PREFIX="docker.io/xingyaoww/"
elif [ "$LANGUAGE" = "java" ]; then
echo "EVAL_DOCKER_IMAGE_PREFIX is java_verified as LANUGUAGE is java"
echo "EVAL_DOCKER_IMAGE_PREFIX is empty as LANGUAGE is java"
EVAL_DOCKER_IMAGE_PREFIX=""
fi
fi

View File

@@ -0,0 +1,344 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import pandas as pd\n",
"from tqdm import tqdm\n",
"\n",
"tqdm.pandas()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 1. Load raw data and convert to training data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import gzip\n",
"import json\n",
"\n",
"from tqdm import tqdm\n",
"\n",
"FILE_PATHS = [\n",
" 'YOURPATH-no-hint-train-t05-run_1/output.with_completions.jsonl.gz',\n",
" 'YOURPATH-no-hint-train-t05-run_2/output.with_completions.jsonl.gz',\n",
"]\n",
"\n",
"# More memory efficient for large files\n",
"# Initialize lists to store the data\n",
"data = []\n",
"\n",
"\n",
"# Read file line by line\n",
"for FILE_PATH in FILE_PATHS:\n",
" with gzip.open(FILE_PATH, 'rb') as f: # Use 'rb' for gzipped files\n",
" for i, line in tqdm(\n",
" enumerate(f), desc=f'Processing {FILE_PATH.split(\"/\")[-1]}'\n",
" ):\n",
" # Parse only the fields we need\n",
" raw_data = json.loads(line)\n",
" data.append(\n",
" {\n",
" 'resolved': raw_data['report']['resolved'],\n",
" 'messages': raw_data['raw_completions']['messages']\n",
" if raw_data['raw_completions'] is not None\n",
" else None,\n",
" 'git_patch': raw_data['test_result'].get('git_patch', ''),\n",
" 'tools': raw_data['raw_completions']['tools']\n",
" if raw_data['raw_completions'] is not None\n",
" and 'tools' in raw_data['raw_completions']\n",
" else None,\n",
" }\n",
" )\n",
"\n",
"# Convert to DataFrame after collecting all data\n",
"df = pd.DataFrame(data)\n",
"print(f'#total amount of data={len(df)}')\n",
"df = df[~df['messages'].isna()]\n",
"print(f'#total amount of data after removing nan={len(df)}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filter"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def _contains_multiple_tool_calls(messages: list[dict]) -> bool:\n",
" return any(\n",
" message.get('tool_calls') and len(message['tool_calls']) > 1\n",
" for message in messages\n",
" )\n",
"\n",
"\n",
"df['contains_multiple_tool_calls'] = df['messages'].apply(_contains_multiple_tool_calls)\n",
"display(df.groupby(['contains_multiple_tool_calls'])['resolved'].sum())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"import copy\n",
"\n",
"# Convert function calling messages to non-function calling messages\n",
"from openhands.llm.fn_call_converter import (\n",
" FunctionCallConversionError,\n",
" convert_fncall_messages_to_non_fncall_messages,\n",
" convert_from_multiple_tool_calls_to_single_tool_call_messages,\n",
")\n",
"\n",
"total_failed = 0\n",
"\n",
"\n",
"def _convert_messages(messages: list[dict], tools: list[dict]) -> list[dict]:\n",
" global total_failed\n",
" message_copy = copy.deepcopy(messages)\n",
" for message in message_copy:\n",
" if message['content'] is None:\n",
" message['content'] = ''\n",
" try:\n",
" return convert_fncall_messages_to_non_fncall_messages(\n",
" message_copy, tools, add_in_context_learning_example=False\n",
" )\n",
" except FunctionCallConversionError:\n",
" total_failed += 1\n",
" # print(f'Failed to convert messages: {messages}\\nTools: {tools}')\n",
" # traceback.print_exc()\n",
" return None\n",
"\n",
"\n",
"df['converted_messages'] = df.apply(\n",
" lambda row: convert_from_multiple_tool_calls_to_single_tool_call_messages(\n",
" row['messages'], ignore_final_tool_result=True\n",
" ),\n",
" axis=1,\n",
")\n",
"df['nonfncall_messages'] = df.apply(\n",
" lambda row: _convert_messages(row['converted_messages'], row['tools']), axis=1\n",
")\n",
"print('total nan', df['nonfncall_messages'].isna().sum())\n",
"df = df[~df['nonfncall_messages'].isna()]\n",
"print(df['nonfncall_messages'].iloc[0])\n",
"\n",
"print(f'Total failed: {total_failed}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tokenization"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandarallel import pandarallel\n",
"from transformers import AutoTokenizer\n",
"\n",
"os.environ['TOKENIZERS_PARALLELISM'] = 'false'\n",
"pandarallel.initialize(progress_bar=True, verbose=1, nb_workers=16)\n",
"tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen2.5-7B-Instruct')\n",
"\n",
"\n",
"def clean_messages(messages):\n",
" clean = []\n",
" for msg in messages:\n",
" if not isinstance(msg, dict):\n",
" continue\n",
" role = msg.get('role')\n",
" content = msg.get('content')\n",
" if isinstance(content, str):\n",
" text = content\n",
" elif isinstance(content, dict):\n",
" text = content.get('text')\n",
" elif (\n",
" isinstance(content, list)\n",
" and len(content) == 1\n",
" and isinstance(content[0], dict)\n",
" ):\n",
" text = content[0].get('text')\n",
" else:\n",
" print(f'Format not accepted {content}')\n",
" clean.append({'role': role, 'content': text})\n",
" return clean\n",
"\n",
"\n",
"# Step 1: Clean the messages\n",
"df['nonfncall_messages'] = df['nonfncall_messages'].apply(clean_messages)\n",
"\n",
"# Step 2: Compute token count\n",
"df['n_tokens'] = df['nonfncall_messages'].parallel_apply(\n",
" lambda x: len(tokenizer.apply_chat_template(x))\n",
")\n",
"\n",
"# print(df['nonfncall_messages'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(f'BEFORE: #total={len(df)}')\n",
"df_selected = df[df['n_tokens'] < 131072]\n",
"print(f'AFTER(truncated to 128k): #total={len(df_selected)}')"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_selected['n_tokens'].describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# ecdf of n_tokens\n",
"import matplotlib.pyplot as plt\n",
"import seaborn as sns\n",
"\n",
"display(df.groupby(['resolved'])['n_tokens'].describe())\n",
"sns.ecdfplot(x='n_tokens', data=df, hue='resolved')\n",
"plt.show()\n",
"\n",
"print(f'#total={len(df)}')\n",
"df_selected = df[df['n_tokens'] < 131072]\n",
"print(f'#selected={len(df_selected)}')\n",
"display(df_selected.groupby(['resolved'])['n_tokens'].describe())\n",
"sns.ecdfplot(x='n_tokens', data=df_selected, hue='resolved')\n",
"plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_selected[~df_selected['resolved']]['n_tokens'].describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_selected['resolved'].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"df_selected.groupby(['resolved'])['n_tokens'].describe()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Save Resolved Messages for SFT"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Flatten messages and change format to {\"content\": \"\", \"role\": \"\"}\n",
"df_selected[df_selected['resolved']][['nonfncall_messages']].rename(\n",
" columns={'nonfncall_messages': 'messages'}\n",
").to_json(\n",
" os.path.join(\n",
" 'PATH_TO_FILE',\n",
" f'policy_traj_128k_swegym_{df_selected[\"resolved\"].value_counts()[True]}i.jsonl',\n",
" ),\n",
" lines=True,\n",
" orient='records',\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}