Abstract
Transformer-based models have brought a radical change to neural machine translation. A key feature of the Transformer architecture is the so-called multi-head attention mechanism, which allows the model to focus simultaneously on different parts of the input. However, recent works have shown that most attention heads learn simple, and often redundant, positional patterns. In this paper, we propose to replace all but one attention head of each encoder layer with simple, fixed (non-learnable) attentive patterns that are solely based on position and do not require any external knowledge. Our experiments with different data sizes and multiple language pairs show that fixing the attention heads on the encoder side of the Transformer at training time does not impact the translation quality and even increases BLEU scores by up to 3 points in low-resource scenarios.
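To make the idea of position-only attention concrete, the sketch below shows one way such fixed patterns can be built: each fixed head is a hard, one-hot attention matrix that attends to a constant relative offset (previous token, next token, the token itself, and so on). The function names, the hard one-hot formulation, and the particular offsets (-2 to 2) are illustrative assumptions for this sketch, not the paper's exact configuration; the paper keeps one learnable head per encoder layer and fixes the remaining heads.

```python
import numpy as np

def fixed_offset_pattern(seq_len: int, offset: int) -> np.ndarray:
    """Fixed (non-learnable) attention matrix: each query position attends
    entirely to the token at a constant relative offset. Positions that
    would fall outside the sentence are clamped to the boundary."""
    attn = np.zeros((seq_len, seq_len))
    for q in range(seq_len):
        k = min(max(q + offset, 0), seq_len - 1)  # clamp at sentence boundaries
        attn[q, k] = 1.0
    return attn

def fixed_heads_output(values: np.ndarray, offsets=(-2, -1, 0, 1, 2)) -> np.ndarray:
    """Apply several fixed positional heads to a value matrix of shape
    (seq_len, d_head) and concatenate their outputs, standing in for the
    learned heads they replace. Offsets here are an illustrative choice."""
    seq_len = values.shape[0]
    heads = [fixed_offset_pattern(seq_len, off) @ values for off in offsets]
    return np.concatenate(heads, axis=-1)

# Example: 6 tokens with 4-dimensional value vectors.
x = np.random.randn(6, 4)
out = fixed_heads_output(x)
print(out.shape)  # (6, 20): five fixed heads, each of width 4
```

Because these matrices depend only on position, they can be precomputed once per sentence length and require no learned attention parameters, which is what allows them to be fixed at training time.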
- Anthology ID: 2020.findings-emnlp.49
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Editors: Trevor Cohn, Yulan He, Yang Liu
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 556–568
- URL: https://aclanthology.org/2020.findings-emnlp.49
- DOI: 10.18653/v1/2020.findings-emnlp.49
- Cite (ACL): Alessandro Raganato, Yves Scherrer, and Jörg Tiedemann. 2020. Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 556–568, Online. Association for Computational Linguistics.
- Cite (Informal): Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation (Raganato et al., Findings 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/2020.findings-emnlp.49.pdf