Abstract
Current state-of-the-art machine translation systems are based on encoder-decoder architectures that first encode the input sequence and then generate an output sequence based on the input encoding. Both are interfaced with an attention mechanism that recombines a fixed encoding of the source tokens based on the decoder state. We propose an alternative approach which instead relies on a single 2D convolutional neural network across both sequences. Each layer of our network re-codes source tokens on the basis of the output sequence produced so far. Attention-like properties are therefore pervasive throughout the network. Our model yields excellent results, outperforming state-of-the-art encoder-decoder systems, while being conceptually simpler and having fewer parameters.
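The mechanism the abstract describes can be made concrete with a short sketch. The following is a minimal illustrative PyTorch implementation of the pervasive-attention idea, not the authors' released `elbayadm/attn2d` code: every grid cell (j, i) holds the concatenated embeddings of target token j and source token i, a stack of 2D convolutions is made causal along the target axis so position j only sees the output produced so far, and features are max-pooled over the source axis before predicting the next token. The class and variable names are my own, and the paper's DenseNet-style blocks are simplified here to a plain convolution stack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PervasiveAttention2D(nn.Module):
    """Sketch of a single 2D CNN over the (target x source) grid."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hidden=64,
                 n_layers=4, kernel=3):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.kernel = kernel
        layers, in_ch = [], 2 * emb_dim  # each cell holds [tgt_j ; src_i]
        for _ in range(n_layers):
            layers.append(nn.Conv2d(in_ch, hidden, kernel, padding=0))
            in_ch = hidden
        self.convs = nn.ModuleList(layers)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt):
        # src: (B, N); tgt: (B, M), the target tokens produced so far.
        B, N = src.shape
        _, M = tgt.shape
        s = self.src_emb(src)  # (B, N, E)
        t = self.tgt_emb(tgt)  # (B, M, E)
        # Build the 2D grid: cell (j, i) = concat(tgt_j, src_i).
        grid = torch.cat([
            t.unsqueeze(2).expand(B, M, N, -1),
            s.unsqueeze(1).expand(B, M, N, -1),
        ], dim=-1).permute(0, 3, 1, 2)  # (B, 2E, M, N)
        h, k = grid, self.kernel
        for conv in self.convs:
            # Causal padding on the target (height) axis: output row j
            # depends only on target rows <= j, so source tokens are
            # re-coded at every layer based on the output so far. The
            # source (width) axis is padded symmetrically.
            h = F.pad(h, (k // 2, k // 2, k - 1, 0))
            h = F.relu(conv(h))
        # Collapse the source axis (max-pooling, as in the paper) to get
        # one feature vector per target position, then predict the token.
        pooled = h.max(dim=3).values            # (B, hidden, M)
        return self.out(pooled.transpose(1, 2))  # (B, M, tgt_vocab)

# Usage: logits[:, j] is the next-token distribution given tgt[:, :j+1].
model = PervasiveAttention2D(src_vocab=100, tgt_vocab=120)
src = torch.randint(0, 100, (2, 7))
tgt = torch.randint(0, 120, (2, 5))
logits = model(src, tgt)  # (2, 5, 120)
```

Note how attention-like behavior falls out of the architecture rather than being a separate module: the per-layer 2D convolutions mix source and target information at every grid cell, so no fixed source encoding is ever computed.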
- Anthology ID: K18-1010
- Volume: Proceedings of the 22nd Conference on Computational Natural Language Learning
- Month: October
- Year: 2018
- Address: Brussels, Belgium
- Editors: Anna Korhonen, Ivan Titov
- Venue: CoNLL
- SIG: SIGNLL
- Publisher: Association for Computational Linguistics
- Pages: 97–107
- URL: https://aclanthology.org/K18-1010
- DOI: 10.18653/v1/K18-1010
- Cite (ACL): Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2018. Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 97–107, Brussels, Belgium. Association for Computational Linguistics.
- Cite (Informal): Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction (Elbayad et al., CoNLL 2018)
- PDF: https://preview.aclanthology.org/improve-issue-templates/K18-1010.pdf
- Code: elbayadm/attn2d + additional community code