
# Encoder-Decoder model with a DeBERTa decoder

## Pre-trained models

- Encoder: `microsoft/deberta-v3-small`
- Decoder: `deliciouscat/deberta-v3-base-decoder-v0.1` (6 transformer layers, 8 attention heads)

## Data used

- `HuggingFaceFW/fineweb`, from which 124,800 samples were drawn (see the sketch below)
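
The card does not state how the subset was sampled. As an illustration only, assuming a simple streaming take of the first 124,800 documents, the subset could be drawn like this:

```python
from datasets import load_dataset

# Hypothetical sampling sketch: stream fineweb and keep the first 124,800 documents.
# The actual sampling strategy used for training is not documented in this card.
stream = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
subset = [example["text"] for _, example in zip(range(124_800), stream)]
print(len(subset))  # 124800
```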

## Training hyperparameters

- Optimizer: AdamW, lr=2.3e-5, betas=(0.875, 0.997)
- Batch size: 12 (the maximum that fit in a Colab Pro A100 environment)
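
A minimal sketch of how these hyperparameters map onto a PyTorch setup; the model and dataset below are stand-ins, not the actual encoder-decoder or fineweb subset:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-ins for the real encoder-decoder model and tokenized dataset (not part of this card).
model = nn.Linear(8, 8)
train_dataset = TensorDataset(torch.randn(48, 8))

# Hyperparameters as reported above.
optimizer = torch.optim.AdamW(model.parameters(), lr=2.3e-5, betas=(0.875, 0.997))
train_loader = DataLoader(train_dataset, batch_size=12, shuffle=True)
```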

## How to use

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Replace the repo id below with this model's repository id on the Hub;
# the id shown is the generic EncoderDecoderModel example checkpoint.
model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
```
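
A hedged generation sketch to show how the loaded model and tokenizer fit together; the input text and `max_new_tokens` value are arbitrary choices, and the decoding settings used by the author are not specified here:

```python
# Encode an input passage and generate with the seq2seq model.
text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```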

## Future work!

- Train on more scientific data
- Fine-tune on a keyword extraction task