|
--- |
|
license: cc-by-4.0 |
|
task_categories: |
|
- other |
|
library_name: datasets |
|
dataset_info: |
|
features: |
|
- name: instance_id |
|
dtype: string |
|
- name: base_commit |
|
dtype: string |
|
- name: created_at |
|
dtype: string |
|
- name: environment_setup_commit |
|
dtype: string |
|
- name: hints_text |
|
dtype: string |
|
- name: patch |
|
dtype: string |
|
- name: problem_statement |
|
dtype: string |
|
- name: repo |
|
dtype: string |
|
- name: test_patch |
|
dtype: string |
|
- name: meta |
|
struct: |
|
- name: commit_name |
|
dtype: string |
|
- name: failed_lite_validators |
|
sequence: string |
|
- name: has_test_patch |
|
dtype: bool |
|
- name: is_lite |
|
dtype: bool |
|
- name: llm_score |
|
struct: |
|
- name: difficulty_score |
|
dtype: int64 |
|
- name: issue_text_score |
|
dtype: int64 |
|
- name: test_score |
|
dtype: int64 |
|
- name: num_modified_files |
|
dtype: int64 |
|
- name: version |
|
dtype: string |
|
- name: install_config |
|
struct: |
|
- name: env_vars |
|
struct: |
|
- name: JUPYTER_PLATFORM_DIRS |
|
dtype: string |
|
- name: env_yml_path |
|
sequence: string |
|
- name: install |
|
dtype: string |
|
- name: log_parser |
|
dtype: string |
|
- name: no_use_env |
|
dtype: bool |
|
- name: packages |
|
dtype: string |
|
- name: pip_packages |
|
sequence: string |
|
- name: pre_install |
|
sequence: string |
|
- name: python |
|
dtype: string |
|
- name: reqs_path |
|
sequence: string |
|
- name: test_cmd |
|
dtype: string |
|
- name: requirements |
|
dtype: string |
|
- name: environment |
|
dtype: string |
|
- name: FAIL_TO_PASS |
|
sequence: string |
|
- name: FAIL_TO_FAIL |
|
sequence: string |
|
- name: PASS_TO_PASS |
|
sequence: string |
|
- name: PASS_TO_FAIL |
|
sequence: string |
|
- name: license_name |
|
dtype: string |
|
- name: __index_level_0__ |
|
dtype: int64 |
|
splits: |
|
- name: test |
|
num_bytes: 737537372 |
|
num_examples: 21336 |
|
download_size: 239735457 |
|
dataset_size: 737537372 |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: test |
|
path: data/test-* |
|
--- |
|
|
|
# Dataset Summary |
|
|
|
SWE-rebench is a large-scale dataset designed to support training and evaluation of LLM-based software engineering (SWE) agents, building upon and expanding our earlier release, [SWE-bench-extra](https://huggingface.co/datasets/nebius/SWE-bench-extra). It is constructed using a fully automated pipeline that continuously extracts real-world interactive SWE tasks from GitHub repositories at scale, as detailed in our paper [SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents](https://arxiv.org/abs/2505.20411). The dataset currently comprises over 21,000 issue–pull request pairs from 3,400+ Python repositories, each validated for correctness through automated environment setup and test execution. A curated subset of these tasks also forms the basis of our continuously updated [SWE-rebench leaderboard](https://swe-rebench.com/leaderboard). |
|
SWE-rebench extends the methodology of [SWE-bench](https://www.swebench.com/) with several key enhancements detailed in our paper, including:
|
|
|
* A fully automated pipeline for continuous task collection. |
|
* LLM-driven extraction and validation of environment installation instructions. |
|
* An automated LLM-based task quality assessment pipeline that annotates tasks with labels such as issue clarity, complexity, and test patch validity.
|
|
|
We’ve released 7,500 pre-built Docker images used in our RL pipeline. They’re publicly available on [Docker Hub](https://hub.docker.com/repositories/swerebench). You do not need to build them yourself. |
|
|
|
# News |
|
|
|
[2025/08/05] Uploaded the corresponding Docker images for 7,500 tasks to Docker Hub. |
|
|
|
# How to Use |
|
|
|
```python |
|
from datasets import load_dataset |
|
ds = load_dataset('nebius/SWE-rebench') |
|
``` |
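
For example, here is a minimal sketch of loading the single `test` split and inspecting one task instance with the standard `datasets` API (field names follow the schema table below):

```python
from datasets import load_dataset

# The dataset ships a single 'test' split with ~21k task instances.
ds = load_dataset('nebius/SWE-rebench', split='test')

print(len(ds))  # number of task instances
task = ds[0]
print(task['instance_id'], task['repo'], task['created_at'])
```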
|
|
|
# Dataset Structure |
|
The SWE-rebench dataset schema extends the original SWE-bench schema with additional fields to support richer analysis. The complete schema is detailed in the table below. For more information about this data and the methodology behind collecting it, please refer to our paper.
|
|
|
| Field name | Type | Description | |
|
|----------------------------|--------|-------------------------------------------------------------------------------------------------| |
|
| `instance_id` | str | A formatted instance identifier, usually `repo_owner__repo_name-PR-number`. |
|
| `patch` | str | The gold patch: the patch generated by the PR (minus test-related code) that resolved the issue. |
|
| `repo` | str | The repository owner/name identifier from GitHub. | |
|
| `base_commit` | str | The commit hash of the repository representing the HEAD of the repository before the solution PR is applied. | |
|
| `hints_text` | str | Comments made on the issue before the creation date of the solution PR's first commit. |
|
| `created_at` | str | The creation date of the pull request. | |
|
| `test_patch` | str | A test-file patch that was contributed by the solution PR. | |
|
| `problem_statement` | str | The issue title and body. | |
|
| `version` | str | Installation version to use for running evaluation. | |
|
| `environment_setup_commit` | str | Commit hash to use for environment setup and installation. | |
|
| `FAIL_TO_PASS` | str | A JSON list of strings that represent the set of tests resolved by the PR and tied to the issue resolution. | |
|
| `PASS_TO_PASS` | str | A JSON list of strings that represent tests that should pass before and after the PR application. | |
|
| `meta` | str | A JSON dictionary indicating whether the instance is lite, along with a list of failed lite validators if it is not. | |
|
| `license_name` | str | The type of license of the repository. | |
|
| `install_config` | str | Installation configuration for setting up the repository. | |
|
| `requirements` | str | Frozen (pinned) requirements for the repository. |
|
| `environment` | str | Environment configuration for the repository. | |
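
The struct-valued fields are the most useful entry points for filtering and execution. The sketch below shows one way to read them; depending on the `datasets` version they may arrive as nested dicts (per the `dataset_info` schema above) or as JSON-encoded strings (per the table), so the helper handles both. The inner field names (`is_lite`, `test_cmd`, etc.) come from the schema:

```python
import json

from datasets import load_dataset

ds = load_dataset('nebius/SWE-rebench', split='test')

def as_obj(value):
    # Struct fields may be nested dicts (per the schema) or JSON strings
    # (per the table above); normalize to Python objects either way.
    return json.loads(value) if isinstance(value, str) else value

task = ds[0]
meta = as_obj(task['meta'])
config = as_obj(task['install_config'])

print(task['instance_id'])
print('lite task:', meta['is_lite'])
print('test command:', config['test_cmd'])
print('fail-to-pass tests:', as_obj(task['FAIL_TO_PASS'])[:3])
```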
|
|
|
To execute tasks from SWE-rebench (i.e., set up their environments, apply patches, and run tests), we provide a [fork](https://github.com/SWE-rebench/SWE-bench-fork) of the original SWE-bench execution framework, adapted for our dataset's structure and features. |
|
Our fork is based on `Release 4.0.3` of the SWE-bench framework. The primary modification introduces functionality to source environment installation constants directly from the `install_config` field of each task instance in SWE-rebench. This allows for more flexible, task-specific environment setups.
|
|
|
You can find the details of this modification in [this commit](https://github.com/SWE-rebench/SWE-bench-fork/commit/980d0cca8aa4e73f1d9f894e906370bef8c4de8a).
|
|
|
To build the necessary Docker images and run agents on SWE-rebench tasks, you have two main options: |
|
|
|
1. **Use our SWE-bench fork directly:** Clone the fork and utilize its scripts for building images and executing tasks. The framework will automatically use the `install_config` from each task. |
|
2. **Integrate similar functionality into your existing codebase:** If you have your own execution framework based on SWE-bench or a different system, you can adapt it by implementing a similar mechanism to parse and use the `install_config` field from the SWE-rebench task instances. The aforementioned commit can serve as a reference for this integration; a minimal sketch of the idea follows this list.
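
As an illustration of option 2, here is a hypothetical sketch of turning an `install_config` struct into a shell setup script. The field names follow the dataset schema above; the conda-based layout mirrors SWE-bench-style harnesses and is an assumption, not the fork's exact implementation (see the referenced commit for that):

```python
def build_setup_script(config: dict) -> str:
    """Compose a shell setup script from an install_config struct (illustrative only)."""
    lines = [
        # Hypothetical environment layout; real harnesses may differ.
        f"conda create -n testbed python={config.get('python') or '3.9'} -y",
        "conda activate testbed",
    ]
    lines += config.get('pre_install') or []      # repo-specific pre-install steps
    for pkg in config.get('pip_packages') or []:
        lines.append(f"pip install '{pkg}'")
    if config.get('install'):                     # repo install command, e.g. an editable install
        lines.append(config['install'])
    return "\n".join(lines)
```

At evaluation time, the same struct's `test_cmd` field then provides the command to run inside that environment.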
|
|
|
# License |
|
The dataset is licensed under the Creative Commons Attribution 4.0 license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance. |
|
|
|
# Citation |
|
|
|
```bibtex |
|
@misc{badertdinov2025swerebenchautomatedpipelinetask, |
|
title={SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents}, |
|
author={Ibragim Badertdinov and Alexander Golubev and Maksim Nekrashevich and Anton Shevtsov and Simon Karasik and Andrei Andriushchenko and Maria Trofimova and Daria Litvintseva and Boris Yangel}, |
|
year={2025}, |
|
eprint={2505.20411}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.SE}, |
|
url={https://arxiv.org/abs/2505.20411} |
|
}
```