Update README.md
This dataset consists of the attack samples used for the paper *Extracting training data from fine-tuned Large Language Models for code*.
We provide two subsets:
- The `fine-tuning attack`, which consists of samples selected from the **[fine-tuning set](https://huggingface.co/datasets/fabiosalern/MEM-TUNE_Java)**
- The `pre-training attack`, which consists of samples selected from the Java portion of **[TheStack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2)**
Each subset is further split according to the duplication rate of the samples in the training set (see the loading sketch after this list):
- `d1`: the sample appears exactly once in the training set
- `d2`: the sample appears twice
- `d3`: the sample appears three times
- `dg3`: the sample appears more than three times
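
As a minimal sketch, one of these splits can be loaded with the `datasets` library. The repository id below is a placeholder (this page does not state the dataset's own id), while the `pre-train` config and the `dg3` split name are taken from the YAML configuration excerpted further down:

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual id of this dataset on the Hub.
REPO_ID = "user/attack-samples"

# "pre-train" is the config implied by the pre-train/* data-file paths in the
# card's YAML; "dg3" selects the samples duplicated more than three times.
ds = load_dataset(REPO_ID, "pre-train", split="dg3")

print(ds)     # row count and column names
print(ds[0])  # first attack sample
```

Swapping the split name for `d1`, `d2`, or `d3` selects the other duplication buckets; the config name for the fine-tuning attack subset is not visible in the excerpt below, so it is not shown here.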
The commit also extends the tail of the card's YAML frontmatter with tags and a size category:

@@ -206,4 +206,8 @@ configs:
     path: pre-train/d3-*
   - split: dg3
     path: pre-train/dg3-*
----
+tags:
+- code
+size_categories:
+- 1K<n<10K
+---