MTEQA at WMT21 Metrics Shared Task
Mateusz Krubiński, Erfan Ghadery, Marie-Francine Moens, Pavel Pecina
Abstract
In this paper, we describe our submission to the WMT 2021 Metrics Shared Task. We use automatically generated questions and answers to evaluate the quality of Machine Translation (MT) systems. Our submission builds upon the recently proposed MTEQA framework. Experiments on the WMT20 evaluation datasets show that, at the system level, the MTEQA metric achieves performance comparable with other state-of-the-art solutions, while considering only a limited amount of information from the whole translation.
- Anthology ID: 2021.wmt-1.110
- Volume: Proceedings of the Sixth Conference on Machine Translation
- Month: November
- Year: 2021
- Address: Online
- Venue: WMT
- SIG: SIGMT
- Publisher: Association for Computational Linguistics
- Pages: 1024–1029
- URL: https://aclanthology.org/2021.wmt-1.110
- Cite (ACL): Mateusz Krubiński, Erfan Ghadery, Marie-Francine Moens, and Pavel Pecina. 2021. MTEQA at WMT21 Metrics Shared Task. In Proceedings of the Sixth Conference on Machine Translation, pages 1024–1029, Online. Association for Computational Linguistics.
- Cite (Informal): MTEQA at WMT21 Metrics Shared Task (Krubiński et al., WMT 2021)
- PDF: https://preview.aclanthology.org/remove-xml-comments/2021.wmt-1.110.pdf