The Matthews correlation coefficient (MCC) is used as the checkpoint-selection metric when training the word-level task.
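
For reference, the sketch below shows one way to compute word-level MCC over OK/BAD tags. It uses scikit-learn's `matthews_corrcoef` and flattens per-sentence tag sequences before scoring; this flattening is an assumption about the evaluation setup, not the exact scoring script used for this model.

```python
# Minimal sketch: word-level MCC over OK/BAD tags.
# Assumes tags are flattened across all sentences before scoring;
# the official WMT QE scorer may differ in detail.
from sklearn.metrics import matthews_corrcoef

def word_level_mcc(gold_tags, pred_tags):
    """gold_tags / pred_tags: lists of per-sentence tag lists, e.g. [["OK", "BAD"], ...]."""
    gold_flat = [t for sent in gold_tags for t in sent]
    pred_flat = [t for sent in pred_tags for t in sent]
    # Map OK/BAD strings to 1/0 so matthews_corrcoef sees numeric labels.
    to_int = {"OK": 1, "BAD": 0}
    return matthews_corrcoef([to_int[t] for t in gold_flat],
                             [to_int[t] for t in pred_flat])

# Toy usage example:
gold = [["OK", "OK", "BAD"], ["BAD", "OK"]]
pred = [["OK", "BAD", "BAD"], ["BAD", "OK"]]
print(word_level_mcc(gold, pred))
```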

Following the DCSQE framework, synthetic QE data is generated from the WMT2023 parallel corpus and used for pre-training; the model is then fine-tuned on the WMT2022 QE EN-DE training set. Both stages are implemented with the Fairseq framework.
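
Since the released weights are a Fairseq checkpoint, one way to get started is to load it and inspect the stored training configuration. The sketch below assumes a checkpoint file named `checkpoint_best.pt` in this repository; the actual file name and the keys present in the checkpoint may differ.

```python
# Minimal sketch: inspect the released Fairseq checkpoint.
# The file name "checkpoint_best.pt" is an assumption; adjust it to the
# checkpoint actually shipped in this repository.
from fairseq import checkpoint_utils

state = checkpoint_utils.load_checkpoint_to_cpu("checkpoint_best.pt")

# Newer Fairseq checkpoints store the training config under "cfg",
# older ones under "args"; handle both.
cfg = state.get("cfg") or state.get("args")
print(cfg)

# The model parameters live under "model" as a regular state dict.
print(list(state["model"].keys())[:10])
```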

For a detailed description of the DCSQE framework, please refer to the paper:
Alleviating Distribution Shift in Synthetic Data for Machine Translation Quality Estimation
