Better Quality Estimation for Low Resource Corpus Mining

Muhammed Kocyigit, Jiho Lee, Derry Wijaya


Abstract
Quality Estimation (QE) models have the potential to change how we evaluate and maybe even train machine translation models. However, these models still lack the robustness to achieve general adoption. We show that state-of-the-art QE models, when tested in a Parallel Corpus Mining (PCM) setting, perform unexpectedly poorly due to a lack of robustness to out-of-domain examples. We propose a combination of multitask training, data augmentation, and contrastive learning to achieve better and more robust QE performance. We show that our method significantly improves QE performance on the MLQE challenge and the robustness of QE models when tested in the Parallel Corpus Mining setup. We increase the accuracy in PCM by more than 0.80, making it on par with state-of-the-art PCM methods that use millions of sentence pairs to train their models. In comparison, we use a thousand times less data (7K parallel sentences in total) and propose a novel low-resource PCM method.
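The abstract names contrastive learning as one component of the proposed method. As a rough sketch of how such an objective for parallel sentence pairs typically looks (the function name, embedding source, and temperature value below are illustrative assumptions, not the authors' released code), an in-batch InfoNCE-style loss treats aligned source/target embeddings as positives and every other in-batch pairing as a negative:

import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(src_emb, tgt_emb, temperature=0.1):
    """InfoNCE-style loss: row-aligned (parallel) pairs are positives,
    all other in-batch pairings act as negatives.

    src_emb, tgt_emb: (batch, dim) sentence embeddings, e.g. from a
    shared multilingual encoder; row i of each tensor is a parallel pair.
    """
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    # (batch, batch) cosine-similarity matrix, scaled by temperature
    logits = src @ tgt.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: match source-to-target and target-to-source
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Toy usage with random tensors standing in for encoder outputs
if __name__ == "__main__":
    src = torch.randn(8, 768)
    tgt = torch.randn(8, 768)
    print(in_batch_contrastive_loss(src, tgt).item())

Pulling the aligned pairs together while pushing non-parallel in-batch pairs apart is what makes the resulting scores discriminative enough for mining, which is consistent with the robustness gains the abstract reports; the exact loss and training mixture used in the paper are described in the PDF linked below.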
Anthology ID:
2022.findings-acl.45
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
533–543
URL:
https://aclanthology.org/2022.findings-acl.45
DOI:
10.18653/v1/2022.findings-acl.45
Cite (ACL):
Muhammed Kocyigit, Jiho Lee, and Derry Wijaya. 2022. Better Quality Estimation for Low Resource Corpus Mining. In Findings of the Association for Computational Linguistics: ACL 2022, pages 533–543, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Better Quality Estimation for Low Resource Corpus Mining (Kocyigit et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.findings-acl.45.pdf
Software:
 2022.findings-acl.45.software.zip
Video:
 https://preview.aclanthology.org/naacl24-info/2022.findings-acl.45.mp4
Data:
GLUE, MLQE, MultiNLI