There’s No Data like Better Data: Using QE Metrics for MT Data Filtering
Jan-Thorsten Peter, David Vilar, Daniel Deutsch, Mara Finkelstein, Juraj Juraska, Markus Freitag
Abstract
Quality Estimation (QE), the evaluation of machine translation output without the need for explicit references, has seen large improvements in recent years with the use of neural metrics. In this paper we analyze the viability of using QE metrics for filtering out bad-quality sentence pairs from the training data of neural machine translation (NMT) systems. While most corpus filtering methods focus on detecting noisy examples in collections of texts, usually huge amounts of web-crawled data, QE models are trained to discriminate more fine-grained quality differences. We show that by selecting the highest-quality sentence pairs in the training data, we can improve translation quality while reducing the training data size by half. We also provide a detailed analysis of the filtering results, which highlights the differences between the two approaches.
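The filtering approach described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact pipeline: `qe_score` is a hypothetical placeholder standing in for a neural, reference-free QE metric (e.g., a CometKiwi-style scorer), and the 50% retention fraction mirrors the "reducing the training data size by half" setting.

```python
# Minimal sketch of QE-based training-data filtering.
# Assumption: qe_score is a placeholder for a neural QE metric that scores a
# (source, target) pair without a reference; it is not the authors' model.

def qe_score(source: str, target: str) -> float:
    """Return a reference-free quality score for a sentence pair.

    Placeholder only; in practice this would call a neural QE model.
    """
    raise NotImplementedError

def filter_top_fraction(pairs, fraction=0.5):
    """Keep the highest-scoring `fraction` of (source, target) training pairs."""
    scored = [(qe_score(src, tgt), src, tgt) for src, tgt in pairs]
    scored.sort(key=lambda item: item[0], reverse=True)
    keep = int(len(scored) * fraction)
    return [(src, tgt) for _, src, tgt in scored[:keep]]
```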
- Anthology ID: 2023.wmt-1.50
- Volume: Proceedings of the Eighth Conference on Machine Translation
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Philipp Koehn, Barry Haddow, Tom Kocmi, Christof Monz
- Venue: WMT
- SIG: SIGMT
- Publisher: Association for Computational Linguistics
- Pages: 561–577
- URL: https://aclanthology.org/2023.wmt-1.50
- DOI: 10.18653/v1/2023.wmt-1.50
- Cite (ACL): Jan-Thorsten Peter, David Vilar, Daniel Deutsch, Mara Finkelstein, Juraj Juraska, and Markus Freitag. 2023. There’s No Data like Better Data: Using QE Metrics for MT Data Filtering. In Proceedings of the Eighth Conference on Machine Translation, pages 561–577, Singapore. Association for Computational Linguistics.
- Cite (Informal): There’s No Data like Better Data: Using QE Metrics for MT Data Filtering (Peter et al., WMT 2023)
- PDF: https://preview.aclanthology.org/landing_page/2023.wmt-1.50.pdf