added format and usage sections
README.md
CHANGED
@@ -3,3 +3,58 @@ license: other
---
This version of the dataset is strictly permitted for use exclusively in conjunction with the review process for the paper with Submission Number 13449. Upon completion of the review process, a de-anonymized version of the dataset will be released under a license similar to that of The Stack, which can be found at https://huggingface.co/datasets/bigcode/the-stack.
## Dataset Format
The dataset contains four different subdatasets, or configurations in Hugging Face Datasets terminology: `bm25_contexts`, `PP_contexts`, `randomNN_contexts`, and `sources`.

The first three contain the data used to train and test RepoFusion; the last one contains the actual Java source code files the data was taken from.
The format of the data for the first three configurations is as follows:
```
features = datasets.Features({
    'id': datasets.Value('string'),
    'hole_file': datasets.Value('string'),
    'hole_line': datasets.Value('int32'),
    'hole_pos': datasets.Value('int32'),
    'question': datasets.Value('string'),
    'target': datasets.Value('string'),
    'answers': datasets.Sequence(
        datasets.Value('string')
    ),
    'ctxs': [{
        'title': datasets.Value('string'),
        'text': datasets.Value('string'),
        'score': datasets.Value('float64')
    }]
})
```
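A single example under this schema is a plain dict, and the retrieved contexts in `ctxs` can be ranked by their `score` field. A minimal sketch — all record values below are made up for illustration, not taken from the dataset:

```python
# Hypothetical record following the schema above; every value is made up.
example = {
    'id': 'some_user/some_repo/hole_0',
    'hole_file': 'src/Main.java',
    'hole_line': 42,
    'hole_pos': 7,
    'question': 'public static void main(String[] args) {',
    'target': 'System.out.println("Hello");',
    'answers': ['System.out.println("Hello");'],
    'ctxs': [
        {'title': 'src/Util.java', 'text': 'public class Util { }', 'score': 12.3},
        {'title': 'src/Log.java', 'text': 'public class Log { }', 'score': 9.8},
    ],
}

# Rank the retrieved repository contexts by retrieval score, best first.
ranked = sorted(example['ctxs'], key=lambda c: c['score'], reverse=True)
print(ranked[0]['title'])  # -> src/Util.java
```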
The format of the `sources` configuration is as follows if accessed through `datasets.load_dataset`:
```
features = datasets.Features({
    'file': datasets.Value('string'),
    'content': datasets.Value('string')
})
```
Alternatively, it can be accessed directly via the file system. Every Java file in each repository is stored as `<data_set_root>/data/<split_name>/<github_user>/<repo_name>/<path/to/every/java/file/in/the/repo>.java`.

There are three splits for each configuration: `train`, `test`, and `validation`.
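Given that layout, the per-repository Java files can be enumerated with nothing more than `pathlib`. The sketch below builds a tiny mock checkout (the user, repo, and file names are hypothetical) and walks one split:

```python
import tempfile
from pathlib import Path

# Build a tiny mock of the on-disk layout described above:
# data/<split_name>/<github_user>/<repo_name>/<path/to/file>.java
# (the user, repo, and file names here are hypothetical).
root = Path(tempfile.mkdtemp())
repo_dir = root / "data" / "train" / "some_user" / "some_repo" / "src"
repo_dir.mkdir(parents=True)
(repo_dir / "Main.java").write_text("public class Main {}\n")

# Enumerate every Java file of every repository in the 'train' split.
train_dir = root / "data" / "train"
java_files = sorted(train_dir.rglob("*.java"))
for f in java_files:
    user, repo = f.relative_to(train_dir).parts[:2]
    print(user, repo, f.name)  # -> some_user some_repo Main.java
```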
## Dataset Usage
First, clone the dataset locally:
```
git clone https://huggingface.co/datasets/RepoFusion/Stack-Repo <local/path/to/manual/data>
```
Second, load the desired configuration and split:
```
ds = datasets.load_dataset(
    "RepoFusion/Stack-Repo",
    name="<configuration_name>",
    split="<split_name>",
    data_dir="<local/path/to/manual/data>"
)
```
NOTE: the `bm25_contexts`, `PP_contexts`, and `randomNN_contexts` configurations can be loaded directly from the Hub without cloning the repo locally. For `sources`, a `ManualDownloadError` will be raised if the repo has not been cloned beforehand or `data_dir` is not specified.