Word Rewarding for Adequate Neural Machine Translation

Yuto Takebayashi, Chu Chenhui, Yuki Arase, Masaaki Nagata


Abstract
To improve translation adequacy in neural machine translation (NMT), we propose a rewarding model with target word prediction using bilingual dictionaries, inspired by the success of decoder constraints in statistical machine translation. In particular, the model first predicts a set of target words that are promising for the translation, and then boosts the probabilities of the predicted words to give them a better chance of being output. Our rewarding model interacts minimally with the decoder, so it can easily be applied to the decoder of an existing NMT system. Extensive evaluation under both resource-rich and resource-poor settings shows that (1) the BLEU score improves by more than 10 points with oracle prediction, (2) the BLEU score improves by about 1.0 point with target word prediction using bilingual dictionaries created either manually or automatically, (3) the hyper-parameters of our model are relatively easy to optimize, and (4) the under-generation problem can be alleviated at the cost of increased over-generation.
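The abstract describes the rewarding mechanism only at a high level. The following is a minimal illustrative sketch, not the authors' implementation: at each decoding step, a fixed additive reward is applied to the decoder scores of target words predicted from a bilingual dictionary, giving those words a better chance of being output. The function name reward_scores, the additive form of the boost, and the reward value are assumptions made for illustration only.

import numpy as np

def reward_scores(step_scores, rewarded_ids, reward=1.0):
    # step_scores:  1-D array of per-token decoder scores (e.g. log-probabilities)
    #               at the current decoding step
    # rewarded_ids: vocabulary ids of target words predicted as promising,
    #               e.g. dictionary translations of the source-sentence words
    # reward:       hypothetical hyper-parameter controlling the boost strength
    boosted = step_scores.copy()
    boosted[list(rewarded_ids)] += reward  # reward the predicted target words
    return boosted

# Toy usage: greedy decoding picks the arg-max over the boosted scores,
# preferring dictionary-predicted words while leaving the decoder untouched.
step_scores = np.random.randn(32000)   # toy scores over a 32k-word vocabulary
rewarded_ids = {17, 42, 1024}          # toy ids of dictionary-predicted words
next_token = int(np.argmax(reward_scores(step_scores, rewarded_ids)))

Because the boost is applied only to the output scores, this style of rewarding interacts minimally with the decoder, which is consistent with the paper's claim that the model can be attached to an existing NMT system.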
Anthology ID:
2018.iwslt-1.3
Volume:
Proceedings of the 15th International Conference on Spoken Language Translation
Month:
October 29-30
Year:
2018
Address:
Brussels
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
International Conference on Spoken Language Translation
Pages:
14–22
URL:
https://aclanthology.org/2018.iwslt-1.3
Cite (ACL):
Yuto Takebayashi, Chu Chenhui, Yuki Arase, and Masaaki Nagata. 2018. Word Rewarding for Adequate Neural Machine Translation. In Proceedings of the 15th International Conference on Spoken Language Translation, pages 14–22, Brussels. International Conference on Spoken Language Translation.
Cite (Informal):
Word Rewarding for Adequate Neural Machine Translation (Takebayashi et al., IWSLT 2018)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2018.iwslt-1.3.pdf
Data
ASPEC