Abstract
This work introduces a simple regressive ensemble for evaluating machine translation quality based on a set of novel and established metrics. We evaluate the ensemble using its correlation with the expert-based MQM scores of the WMT 2021 Metrics workshop. In both monolingual and zero-shot cross-lingual settings, we show a significant performance improvement over single metrics. In the cross-lingual setting, we also demonstrate that an ensemble approach applies well to unseen languages. Furthermore, we identify a strong reference-free baseline that consistently outperforms the commonly used BLEU and METEOR measures and significantly improves our ensemble's performance.
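To make the ensemble idea concrete, below is a minimal sketch of the general approach the abstract describes: a regressor is fit on the scores of several base metrics and evaluated by its correlation with expert MQM judgements. This is not the paper's implementation (see the mir-mu/regemt repository for that); the regressor choice, feature layout, and data arrays are placeholder assumptions.

```python
# Minimal sketch of a regression ensemble over MT metric scores,
# evaluated by correlation with expert MQM judgements.
# NOTE: hypothetical setup, not the paper's actual implementation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

# Placeholder data: each row holds the scores that several base metrics
# (e.g. BLEU, METEOR, a reference-free metric) assign to one translated
# segment; y holds the corresponding expert MQM score for that segment.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 3)), rng.random(200)
X_test, y_test = rng.random((50, 3)), rng.random(50)

# Fit a regressor mapping the vector of metric scores to an MQM estimate.
ensemble = GradientBoostingRegressor(random_state=0)
ensemble.fit(X_train, y_train)

# Evaluate as in metrics shared tasks: correlation between the predicted
# quality scores and the expert MQM scores on held-out segments.
predictions = ensemble.predict(X_test)
correlation, _ = pearsonr(predictions, y_test)
print(f"Pearson correlation with MQM: {correlation:.3f}")
```

In practice, a held-out language pair (rather than a random split) would be used to probe the zero-shot cross-lingual setting the abstract mentions.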
- Anthology ID: 2021.wmt-1.112
- Volume: Proceedings of the Sixth Conference on Machine Translation
- Month: November
- Year: 2021
- Address: Online
- Editors: Loic Barrault, Ondrej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, Andre Martins, Makoto Morishita, Christof Monz
- Venue: WMT
- SIG: SIGMT
- Publisher: Association for Computational Linguistics
- Pages: 1041–1048
- URL: https://aclanthology.org/2021.wmt-1.112
- Cite (ACL): Michal Stefanik, Vít Novotný, and Petr Sojka. 2021. Regressive Ensemble for Machine Translation Quality Evaluation. In Proceedings of the Sixth Conference on Machine Translation, pages 1041–1048, Online. Association for Computational Linguistics.
- Cite (Informal): Regressive Ensemble for Machine Translation Quality Evaluation (Stefanik et al., WMT 2021)
- PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/2021.wmt-1.112.pdf
- Code: mir-mu/regemt
- Data: MLQE-PE