Learning to Parse and Translate Improves Neural Machine Translation

Akiko Eriguchi, Yoshimasa Tsuruoka, Kyunghyun Cho


Abstract
Relatively little attention has been paid to incorporating linguistic priors into neural machine translation, and much of the previous work has further been constrained to priors on the source side. In this paper, we propose a hybrid model, called NMT+RNNG, that learns to parse and translate by combining the recurrent neural network grammar with attention-based neural machine translation. Our approach encourages the neural machine translation model to incorporate a linguistic prior during training, and lets it translate on its own afterward. Extensive experiments with four language pairs show the effectiveness of the proposed NMT+RNNG.
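The abstract describes a single model trained with both a translation objective and an RNNG-style parsing objective, where the parsing term is used only during training and the model translates on its own at test time. Below is a minimal, hypothetical PyTorch sketch of such a joint objective; the module names (`nmt_decoder`, `rnng_parser`) and the `parse_weight` trade-off are illustrative assumptions, not the authors' implementation (see tempra28/nmtrnng for that).

```python
import torch.nn as nn

class JointNMTRNNGLoss(nn.Module):
    """Hypothetical sketch: joint translation + parsing training objective."""

    def __init__(self, nmt_decoder, rnng_parser, parse_weight=1.0):
        super().__init__()
        self.nmt_decoder = nmt_decoder    # assumed: returns translation log-likelihood
        self.rnng_parser = rnng_parser    # assumed: returns log-likelihood of parse actions
        self.parse_weight = parse_weight  # trade-off between the two terms

    def forward(self, src, tgt, tgt_parse_actions):
        # Translation term: standard attention-based NMT negative log-likelihood.
        nmt_loss = -self.nmt_decoder(src, tgt)
        # Parsing term: RNNG-style negative log-likelihood of the parse actions.
        parse_loss = -self.rnng_parser(tgt, tgt_parse_actions)
        # Joint objective used at training time only; at test time the
        # NMT decoder translates without the parsing component.
        return nmt_loss + self.parse_weight * parse_loss
```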
Anthology ID:
P17-2012
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Editors:
Regina Barzilay, Min-Yen Kan
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
72–78
URL:
https://aclanthology.org/P17-2012
DOI:
10.18653/v1/P17-2012
Cite (ACL):
Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. 2017. Learning to Parse and Translate Improves Neural Machine Translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 72–78, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Learning to Parse and Translate Improves Neural Machine Translation (Eriguchi et al., ACL 2017)
PDF:
https://preview.aclanthology.org/naacl24-info/P17-2012.pdf
Video:
https://preview.aclanthology.org/naacl24-info/P17-2012.mp4
Code:
tempra28/nmtrnng