Widening the Representation Bottleneck in Neural Machine Translation with Lexical Shortcuts

Denis Emelin, Ivan Titov, Rico Sennrich


Abstract
The transformer is a state-of-the-art neural translation model that uses attention to iteratively refine lexical representations with information drawn from the surrounding context. Lexical features are fed into the first layer and propagated through a deep network of hidden layers. We argue that the need to represent and propagate lexical features in each layer limits the model’s capacity for learning and representing other information relevant to the task. To alleviate this bottleneck, we introduce gated shortcut connections between the embedding layer and each subsequent layer within the encoder and decoder. This enables the model to access relevant lexical content dynamically, without expending limited resources on storing it within intermediate states. We show that the proposed modification yields consistent improvements over a baseline transformer on standard WMT translation tasks in 5 translation directions (0.9 BLEU on average) and reduces the amount of lexical information passed along the hidden layers. We furthermore evaluate different ways to integrate lexical connections into the transformer architecture and present ablation experiments exploring the effect of the proposed shortcuts on model behavior.
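The abstract describes gating the embedding-layer output into each subsequent layer. The sketch below illustrates that general idea in PyTorch: a learned sigmoid gate interpolates between the lexical embeddings and the current hidden state. It is a minimal, hypothetical sketch (module and parameter names are assumptions), not the authors' exact formulation; in the paper the shortcuts are applied within the attention sub-layers rather than to a single hidden state as shown here.

```python
import torch
import torch.nn as nn


class LexicalShortcutGate(nn.Module):
    """Gated shortcut that re-injects embedding-layer content into a deeper layer.

    Simplified illustration only; the published model integrates such gates
    into the keys and values of each attention sub-layer.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # Gate is computed from the concatenation of hidden state and embedding.
        self.gate_proj = nn.Linear(2 * d_model, d_model)

    def forward(self, hidden: torch.Tensor, embed: torch.Tensor) -> torch.Tensor:
        # hidden, embed: [batch, seq_len, d_model]
        gate = torch.sigmoid(self.gate_proj(torch.cat([hidden, embed], dim=-1)))
        # Interpolate between lexical content and the layer's hidden representation.
        return gate * embed + (1.0 - gate) * hidden
```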
Anthology ID:
W19-5211
Volume:
Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)
Month:
August
Year:
2019
Address:
Florence, Italy
Editors:
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, Karin Verspoor
Venue:
WMT
SIG:
SIGMT
Publisher:
Association for Computational Linguistics
Pages:
102–115
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/W19-5211/
DOI:
10.18653/v1/W19-5211
Cite (ACL):
Denis Emelin, Ivan Titov, and Rico Sennrich. 2019. Widening the Representation Bottleneck in Neural Machine Translation with Lexical Shortcuts. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 102–115, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Widening the Representation Bottleneck in Neural Machine Translation with Lexical Shortcuts (Emelin et al., WMT 2019)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/W19-5211.pdf
Code:
demelin/transformer_lexical_shortcuts