Update README.md #1
by MiriUll - opened

README.md CHANGED
@@ -1,3 +1,12 @@
+---
+license: mit
+language:
+- de
+---
+# German text simplification with custom decoder
+This model was initialized from an mBART model, and its decoder was replaced by a GPT-2 language model pre-trained for German Easy Language. For more details, visit our [GitHub repository](https://github.com/MiriUll/Language-Models-German-Simplification).
+
+## Usage
 ```python
 import torch
 from transformers import AutoTokenizer
@@ -30,4 +39,15 @@ for key, value in test_input.items():

 outputs = model.generate(**test_input, num_beams=3, max_length=1024)
 decoder_tokenizer.batch_decode(outputs)
-```
+```
+
+## Citation
+If you use our model, please cite:
+@misc{anschütz2023language,
+  title={Language Models for German Text Simplification: Overcoming Parallel Data Scarcity through Style-specific Pre-training},
+  author={Miriam Anschütz and Joshua Oehms and Thomas Wimmer and Bartłomiej Jezierski and Georg Groh},
+  year={2023},
+  eprint={2305.12908},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
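The diff shows only the first and last lines of the README's usage snippet; the loading steps in between (old lines 4-29) are not visible here. Below is a minimal, self-contained sketch of what the full flow could look like for this kind of mBART-encoder / GPT-2-decoder model. The checkpoint ids, the `AutoModelForSeq2SeqLM` loading class, and the example sentence are placeholders and assumptions for illustration; only `test_input`, `model.generate(**test_input, num_beams=3, max_length=1024)`, and `decoder_tokenizer.batch_decode(outputs)` come from the diff itself.

```python
# Minimal sketch, assuming the elided README lines load one tokenizer per side
# and a seq2seq model; all checkpoint names below are placeholders, not the
# actual Hub ids from this repository.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<this-model-repo>"              # mBART encoder with German Easy Language GPT-2 decoder
decoder_id = "<german-easy-gpt2-repo>"      # checkpoint matching the replaced GPT-2 decoder

tokenizer = AutoTokenizer.from_pretrained(model_id)            # encoder-side tokenizer
decoder_tokenizer = AutoTokenizer.from_pretrained(decoder_id)  # decoder-side tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
model.eval()

# Tokenize a German input sentence and move the tensors to the model's device.
text = "Dieser Satz ist ein Beispiel für einen komplizierten deutschen Satz."
test_input = tokenizer(text, return_tensors="pt")
for key, value in test_input.items():   # same loop as in the diff's hunk header
    test_input[key] = value.to(model.device)

# Beam-search generation, decoded with the decoder-side tokenizer as in the diff.
with torch.no_grad():
    outputs = model.generate(**test_input, num_beams=3, max_length=1024)
print(decoder_tokenizer.batch_decode(outputs, skip_special_tokens=True))
```

If the repository's custom encoder-decoder combination does not load through `AutoModelForSeq2SeqLM`, the full usage section of the model card (the lines this diff elides) is the authoritative reference.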