Modeling Past and Future for Neural Machine Translation

Zaixiang Zheng, Hao Zhou, Shujian Huang, Lili Mou, Xinyu Dai, Jiajun Chen, Zhaopeng Tu


Abstract
Existing neural machine translation (NMT) systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, providing NMT systems with knowledge of the translated and untranslated contents. Experimental results show that the proposed approach significantly improves performance on Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both translation quality and alignment error rate.
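The abstract describes the mechanism only at a high level: an attention-based decoder is augmented with two extra recurrent layers, one accumulating the already-translated (Past) source content from each step's attention context and one tracking the still-untranslated (Future) content, with both summaries fed back into the attention model and the decoder state. The sketch below illustrates one way such a decoder step could be wired up. It is not the authors' exact formulation: standard GRU cells stand in for the paper's specialized recurrent updates, and all names here (e.g. PastFutureDecoderStep) are hypothetical.

```python
import torch
import torch.nn as nn


class PastFutureDecoderStep(nn.Module):
    """One decoder step with Past/Future recurrent summaries (illustrative sketch)."""

    def __init__(self, emb_dim, hid_dim, src_dim):
        super().__init__()
        self.past_rnn = nn.GRUCell(src_dim, hid_dim)    # accumulates translated source content
        self.future_rnn = nn.GRUCell(src_dim, hid_dim)  # tracks remaining untranslated content
        # Decoder state update sees the target embedding, the attention context,
        # and the current Past/Future summaries.
        self.dec_rnn = nn.GRUCell(emb_dim + src_dim + 2 * hid_dim, hid_dim)
        # Attention query conditions on the decoder state plus the Past/Future summaries.
        self.att_query = nn.Linear(3 * hid_dim, src_dim)
        self.att_score = nn.Linear(src_dim, 1)

    def forward(self, y_emb, s_prev, past_prev, fut_prev, enc_states):
        # enc_states: (batch, src_len, src_dim); all other tensors: (batch, dim)
        query = self.att_query(torch.cat([s_prev, past_prev, fut_prev], dim=-1))
        scores = self.att_score(torch.tanh(enc_states + query.unsqueeze(1)))
        alpha = torch.softmax(scores, dim=1)             # attention weights over source positions
        ctx = (alpha * enc_states).sum(dim=1)            # source context c_t for this step

        past = self.past_rnn(ctx, past_prev)             # fold c_t into the "translated" summary
        fut = self.future_rnn(ctx, fut_prev)             # update the "untranslated" summary

        s = self.dec_rnn(torch.cat([y_emb, ctx, past, fut], dim=-1), s_prev)
        return s, past, fut, ctx
```

One natural initialization under these assumptions (again an illustration, not a detail stated in the abstract) is to start the Future state from a summary of the encoded source and the Past state from zeros, so that over the decoding steps the Past summary grows toward the full source representation while the Future summary shrinks away from it.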
Anthology ID:
Q18-1011
Volume:
Transactions of the Association for Computational Linguistics, Volume 6
Month:
Year:
2018
Address:
Cambridge, MA
Editors:
Lillian Lee, Mark Johnson, Kristina Toutanova, Brian Roark
Venue:
TACL
Publisher:
MIT Press
Pages:
145–157
URL:
https://aclanthology.org/Q18-1011
DOI:
10.1162/tacl_a_00011
Cite (ACL):
Zaixiang Zheng, Hao Zhou, Shujian Huang, Lili Mou, Xinyu Dai, Jiajun Chen, and Zhaopeng Tu. 2018. Modeling Past and Future for Neural Machine Translation. Transactions of the Association for Computational Linguistics, 6:145–157.
Cite (Informal):
Modeling Past and Future for Neural Machine Translation (Zheng et al., TACL 2018)
PDF:
https://preview.aclanthology.org/add_acl24_videos/Q18-1011.pdf
Code:
zhengzx-nlp/past-and-future-nmt