Document-Level Neural Machine Translation with Hierarchical Attention Networks
Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, James Henderson
Abstract
Neural Machine Translation (NMT) can be improved by including document-level contextual information. For this purpose, we propose a hierarchical attention model to capture the context in a structured and dynamic manner. The model is integrated into the original NMT architecture as another level of abstraction, conditioning on the NMT model’s own previous hidden states. Experiments show that hierarchical attention significantly improves the BLEU score over a strong NMT baseline and over the state-of-the-art in context-aware methods, and that both the encoder and decoder benefit from context in complementary ways.
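To make the abstract's description concrete, here is a minimal PyTorch sketch of the two-level attention it describes: word-level attention over the hidden states of each previous sentence, sentence-level attention over the resulting sentence summaries, and a gate that merges the document context back into the current hidden state. All names are illustrative, the attention here is single-head dot-product for brevity (the released model, idiap/HAN_NMT, uses multi-head attention), and padding masks are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalContext(nn.Module):
    """Sketch of a HAN-style document context module.

    For each query (a hidden state of the current sentence), attend first
    over the words of each previous sentence, then over the resulting
    sentence summaries, and merge the context into the query with a gate.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.q_word = nn.Linear(d_model, d_model)    # query proj., word level
        self.q_sent = nn.Linear(d_model, d_model)    # query proj., sentence level
        self.gate = nn.Linear(2 * d_model, d_model)  # lambda-gate over [h; d]
        self.scale = d_model ** -0.5

    def forward(self, h, prev):
        """
        h:    (batch, t, d)      hidden states of the current sentence
        prev: (batch, k, n, d)   hidden states of k previous sentences,
                                 each padded to n words (masking omitted here)
        returns (batch, t, d)    context-aware hidden states
        """
        # Word-level attention: one summary per previous sentence, per query.
        q = self.q_word(h) * self.scale                        # (b, t, d)
        word_scores = torch.einsum('btd,bknd->btkn', q, prev)
        word_attn = F.softmax(word_scores, dim=-1)             # over words
        summaries = torch.einsum('btkn,bknd->btkd', word_attn, prev)

        # Sentence-level attention over the k sentence summaries.
        q2 = self.q_sent(h) * self.scale                       # (b, t, d)
        sent_scores = torch.einsum('btd,btkd->btk', q2, summaries)
        sent_attn = F.softmax(sent_scores, dim=-1)             # over sentences
        context = torch.einsum('btk,btkd->btd', sent_attn, summaries)

        # Gated merge of the document context with the original hidden state.
        lam = torch.sigmoid(self.gate(torch.cat([h, context], dim=-1)))
        return lam * h + (1.0 - lam) * context


if __name__ == "__main__":
    ctx = HierarchicalContext(d_model=64)
    h = torch.randn(2, 10, 64)         # current sentence: 2 docs, 10 words
    prev = torch.randn(2, 3, 12, 64)   # 3 previous sentences of 12 words each
    print(ctx(h, prev).shape)          # torch.Size([2, 10, 64])
```

Because the queries are the NMT model's own hidden states, the same module can be attached to either the encoder or the decoder, which is how the paper studies their complementary use of context.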
- Anthology ID: D18-1325
- Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
- Month: October-November
- Year: 2018
- Address: Brussels, Belgium
- Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
- Venue: EMNLP
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 2947–2954
- URL: https://aclanthology.org/D18-1325
- DOI: 10.18653/v1/D18-1325
- Cite (ACL): Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-Level Neural Machine Translation with Hierarchical Attention Networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947–2954, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Document-Level Neural Machine Translation with Hierarchical Attention Networks (Miculicich et al., EMNLP 2018)
- PDF: https://aclanthology.org/D18-1325.pdf
- Code: idiap/HAN_NMT (+ additional community code)