fabiosalern committed (verified)
Commit: 3648daf · Parent: a89032e

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -212,10 +212,10 @@ size_categories:
 - 1K<n<10K
 ---
 
-This dataset consists of the attack samples used for the paper: Extracting training data from fine-tuned Large Language Models for code
+This dataset consists of the attack samples used for the paper "How Much Do Code Language Models Remember? An Investigation on Data Extraction Attacks before and after Fine-tuning"
 
 We have two splits:
-- The `fine-tuning attack`, which consists of selected samples coming from the **[fine-tuning set](https://huggingface.co/datasets/fabiosalern/MEM-TUNE_Java)**
+- The `fine-tuning attack`, which consists of selected samples coming from the **[fine-tuning set](AISE-TUDelft/memtune-tuning_data)**
 - The `pre-training attack`, which consists of selected samples coming from the **[TheStack-v2](https://huggingface.co/datasets/bigcode/the-stack-v2)** on the Java section
 
 We have different splits depending on the duplication rate of the samples:
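For readers who want to inspect the splits described in the updated README, below is a minimal sketch of loading them with the Hugging Face `datasets` library. The repository id and split names here are assumptions for illustration only (they are not stated in this excerpt); substitute the actual identifiers from the dataset card.

```python
# Minimal sketch, assuming a hypothetical repo id and split names.
# Check the dataset card on the Hub for the real identifiers.
from datasets import load_dataset

REPO_ID = "AISE-TUDelft/memtune-attack_data"  # hypothetical, for illustration

# Loads every available split into a DatasetDict and prints split names / row counts
ds = load_dataset(REPO_ID)
print(ds)

# Selecting a single split (split name assumed, not confirmed by the card):
# fine_tuning_attack = load_dataset(REPO_ID, split="fine_tuning_attack")
```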