Update README.md
README.md CHANGED
@@ -212,10 +212,10 @@ size_categories:
 - 1K<n<10K
 ---
 
-This dataset consists of the attack samples used for the paper
+This dataset consists of the attack samples used for the paper "How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning"
 
 We have two splits:
 
-- The `fine-tuning attack`, which consists of selected samples coming from the **[fine-tuning set](
+- The `fine-tuning attack`, which consists of selected samples coming from the **[fine-tuning set](AISE-TUDelft/memtune-tuning_data)**
 - The `pre-training attack`, which consists of selected samples coming from the **[TheStack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2)** on the Java section
 
 We have different splits depending on the duplication rate of the samples: