---
license: mit
language:
- de
base_model:
- deepset/gbert-large
---
# Author Regulatory Focus Classifier (German)
This model is a fine-tuned transformer-based classifier that detects the **regulatory focus** in German-language text, classifying whether the language expresses a **promotion** (aspirational, growth-oriented) or **prevention** (safety, obligation-oriented) focus.
It is fine-tuned on top of the German base model `deepset/gbert-large` for binary text classification.
## Model Details
- **Base model**: `deepset/gbert-large`
- **Fine-tuned for**: Binary classification (Regulatory Focus)
- **Language**: German
- **Framework**: Hugging Face Transformers
- **Model format**: `safetensors`
## Use Cases
- Social psychology and communication research
- Marketing and consumer behavior analysis
- Literary or political discourse analysis
- Behavioral modeling and goal orientation profiling
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("aveluth/author_regulatory_focus_classifier")
tokenizer = AutoTokenizer.from_pretrained("aveluth/author_regulatory_focus_classifier")
model.eval()

# "We must ensure that no mistakes happen. Safety has the highest priority."
text = "Wir müssen sicherstellen, dass keine Fehler passieren. Sicherheit hat höchste Priorität."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()
print("Predicted class:", "prevention" if predicted_class == 0 else "promotion")
```
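The same inference can also be run through the Transformers `pipeline` API. A minimal sketch, assuming the repo id above; note that the label string returned depends on the `id2label` mapping stored in the model config and may be the generic `LABEL_0`/`LABEL_1`:

```python
from transformers import pipeline

# High-level alternative to the manual tokenize/forward/argmax steps above.
classifier = pipeline("text-classification", model="aveluth/author_regulatory_focus_classifier")

# "We want to grow and seize new opportunities." (promotion-flavored example)
result = classifier("Wir wollen wachsen und neue Chancen ergreifen.")
print(result)  # e.g. [{'label': 'LABEL_1', 'score': 0.97}]
```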
## Labels
| Class | Description |
|-------------|----------------------------------------|
| `0` | Prevention-focused language |
| `1` | Promotion-focused language |
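To obtain class probabilities rather than a hard label, the logits can be passed through a softmax. A minimal sketch, reusing `model`, `tokenizer`, and `inputs` from the usage example above:

```python
import torch

with torch.no_grad():
    logits = model(**inputs).logits

# Index 0 = prevention, index 1 = promotion, per the table above.
probs = torch.softmax(logits, dim=-1).squeeze()
print(f"prevention: {probs[0].item():.3f}, promotion: {probs[1].item():.3f}")
```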
## Training Details
- **Training data**: Custom labeled corpus based on psychological framing
- **Loss function**: Cross-entropy
- **Optimizer**: AdamW
- **Epochs**: 4
- **Learning rate**: 3e-5
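The training corpus itself is not distributed with this repository, but the hyperparameters above map directly onto the `Trainer` API. A hedged sketch of an equivalent fine-tuning run, assuming a labeled CSV with `text` and `label` columns (file name and output directory are placeholders):

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder corpus: substitute your own data with "text"/"label" columns.
dataset = load_dataset("csv", data_files={"train": "train.csv"})

tokenizer = AutoTokenizer.from_pretrained("deepset/gbert-large")
model = AutoModelForSequenceClassification.from_pretrained("deepset/gbert-large", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# AdamW and cross-entropy loss are the Trainer defaults for sequence
# classification; epochs and learning rate match the values listed above.
args = TrainingArguments(
    output_dir="rf-classifier",
    num_train_epochs=4,
    learning_rate=3e-5,
)

Trainer(model=model, args=args, train_dataset=tokenized["train"]).train()
```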
## Limitations
- Trained on German-language data only
- Performance may vary on out-of-domain text (e.g., technical manuals, poetry)
- May not generalize across all cultural framings of regulatory focus
## License
[MIT](LICENSE)
## Citation
If you use this model in your research, please cite:
```bibtex
@article{velutharambath2023prevention,
title={Prevention or Promotion? Predicting Author's Regulatory Focus},
author={Velutharambath, Aswathy and Sassenberg, Kai and Klinger, Roman},
journal={Northern European Journal of Language Technology},
volume={9},
number={1},
year={2023}
}
```