Training Deeper Neural Machine Translation Models with Transparent Attention

Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, Yonghui Wu


Abstract
While current state-of-the-art NMT models, such as RNN seq2seq and Transformers, possess a large number of parameters, they are still shallow in comparison to convolutional models used for both text and vision applications. In this work we attempt to train significantly (2-3x) deeper Transformer and Bi-RNN encoders for machine translation. We propose a simple modification to the attention mechanism that eases the optimization of deeper models, and results in consistent gains of 0.7-1.1 BLEU on the benchmark WMT’14 English-German and WMT’15 Czech-English tasks for both architectures.
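A minimal sketch of the kind of attention modification the abstract describes, under the assumption that "transparent attention" means letting the decoder attend to a learned, softmax-normalized combination of all encoder layer outputs instead of only the top layer. The function and variable names below (e.g. transparent_encoder_outputs, layer_logits) and the shapes are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a transparent-attention style combination of encoder layers.
# Assumption: each decoder block j attends over a weighted sum of all
# encoder layer outputs, with weights softmax-normalized over layers.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def transparent_encoder_outputs(layer_outputs, layer_logits):
    """Combine per-layer encoder states into one source memory per decoder block.

    layer_outputs: list of (src_len, d_model) arrays, one per encoder layer
                   (optionally including the embedding layer).
    layer_logits:  (num_layers, num_decoder_blocks) learned scalars.
    Returns:       (num_decoder_blocks, src_len, d_model) array; decoder block j
                   would attend over combined[j] rather than the top layer only.
    """
    stacked = np.stack(layer_outputs)          # (num_layers, src_len, d_model)
    weights = softmax(layer_logits, axis=0)    # normalize over encoder layers
    # Weighted sum of layer outputs, one combination per decoder block.
    return np.einsum("lj,lsd->jsd", weights, stacked)

# Toy usage: 6 encoder layers + embeddings, 6 decoder blocks.
rng = np.random.default_rng(0)
outputs = [rng.standard_normal((10, 16)) for _ in range(7)]
logits = rng.standard_normal((7, 6))
memories = transparent_encoder_outputs(outputs, logits)
print(memories.shape)  # (6, 10, 16)
```

Because every encoder layer receives gradient through these combination weights, lower layers in a deep encoder get a more direct training signal, which is the optimization benefit the abstract claims.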
Anthology ID: D18-1338
Volume: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month: October-November
Year: 2018
Address: Brussels, Belgium
Editors: Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 3028–3033
URL: https://aclanthology.org/D18-1338
DOI: 10.18653/v1/D18-1338
Cite (ACL): Ankur Bapna, Mia Chen, Orhan Firat, Yuan Cao, and Yonghui Wu. 2018. Training Deeper Neural Machine Translation Models with Transparent Attention. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3028–3033, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal): Training Deeper Neural Machine Translation Models with Transparent Attention (Bapna et al., EMNLP 2018)
PDF: https://aclanthology.org/D18-1338.pdf