The Highs and Lows of Simple Lexical Domain Adaptation Approaches for Neural Machine Translation

Nikolay Bogoychev, Pinzhen Chen


Abstract
Machine translation systems are vulnerable to domain mismatch, especially in a low-resource scenario. Out-of-domain translations are often of poor quality and prone to hallucinations, due to exposure bias and the decoder acting as a language model. We adopt two approaches to alleviate this problem: lexical shortlisting restricted by IBM statistical alignments, and hypothesis reranking based on similarity. The methods are computationally cheap and show success on low-resource out-of-domain test sets. However, the methods lose their advantage when there is sufficient data or too great a domain mismatch. This is due to both the IBM model losing its advantage over the implicitly learned neural alignment, and issues with subword segmentation of unseen words.
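
To make the first approach concrete, below is a minimal Python sketch of lexical shortlisting driven by word-alignment counts: for each source token, the most frequently aligned target tokens are kept, and at decoding time the output vocabulary would be restricted to the union of the shortlists of the tokens in the input sentence. This is an illustration only, not the authors' Marian implementation; the file names, Pharaoh-format ("i-j") alignments, and top-k cutoff are assumptions.

# Illustrative sketch (not the paper's exact implementation): build a
# per-source-token lexical shortlist from alignment counts, e.g. produced
# by an IBM-model aligner or fast_align, in Pharaoh "i-j" format.
from collections import Counter, defaultdict

TOP_K = 100  # hypothetical cutoff: keep the K most frequently aligned target tokens

def build_shortlist(src_path, tgt_path, align_path, top_k=TOP_K):
    # Count how often each target token is aligned to each source token.
    counts = defaultdict(Counter)
    with open(src_path) as fs, open(tgt_path) as ft, open(align_path) as fa:
        for src_line, tgt_line, align_line in zip(fs, ft, fa):
            src, tgt = src_line.split(), tgt_line.split()
            for pair in align_line.split():
                i, j = map(int, pair.split("-"))
                counts[src[i]][tgt[j]] += 1
    # Keep only the top-k aligned target tokens per source token.
    return {s: [t for t, _ in c.most_common(top_k)] for s, c in counts.items()}

def allowed_targets(shortlist, src_sentence):
    # Union of shortlists for the tokens of one source sentence; the decoder's
    # softmax would be restricted to this set (plus special tokens).
    allowed = {"<eos>", "<unk>"}
    for token in src_sentence.split():
        allowed.update(shortlist.get(token, []))
    return allowed

In practice such a shortlist is precomputed once from the aligned training data and passed to the decoder (Marian supports this via its lexical shortlist option), so the restriction adds essentially no cost at translation time.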
Anthology ID:
2021.insights-1.12
Volume:
Proceedings of the Second Workshop on Insights from Negative Results in NLP
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
João Sedoc, Anna Rogers, Anna Rumshisky, Shabnam Tafreshi
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
74–80
URL:
https://aclanthology.org/2021.insights-1.12
DOI:
10.18653/v1/2021.insights-1.12
Cite (ACL):
Nikolay Bogoychev and Pinzhen Chen. 2021. The Highs and Lows of Simple Lexical Domain Adaptation Approaches for Neural Machine Translation. In Proceedings of the Second Workshop on Insights from Negative Results in NLP, pages 74–80, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
The Highs and Lows of Simple Lexical Domain Adaptation Approaches for Neural Machine Translation (Bogoychev & Chen, insights 2021)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2021.insights-1.12.pdf
Video:
https://preview.aclanthology.org/dois-2013-emnlp/2021.insights-1.12.mp4
Code:
marian-nmt/marian