Abstract
Recent studies of the computational power of recurrent neural networks (RNNs) reveal a hierarchy of RNN architectures, given real-time and finite-precision assumptions. Here we study auto-regressive Transformers with linearised attention, a.k.a. linear Transformers (LTs) or Fast Weight Programmers (FWPs). LTs are special in the sense that they are equivalent to RNN-like sequence processors with a fixed-size state, while they can also be expressed as the now-popular self-attention networks. We show that many well-known results for the standard Transformer directly transfer to LTs/FWPs. Our formal language recognition experiments demonstrate how recently proposed FWP extensions such as recurrent FWPs and self-referential weight matrices successfully overcome certain limitations of the LT, e.g., allowing for generalisation on the parity problem. Our code is public.
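The abstract's claim that LTs are equivalent to RNN-like sequence processors with a fixed-size state can be made concrete with the standard linear-attention recurrence. Below is a minimal NumPy sketch of one such step; the ReLU feature map and the epsilon in the normaliser are illustrative assumptions, not necessarily the exact variant studied in the paper.

```python
import numpy as np

def lt_step(S, z, q, k, v, phi=lambda x: np.maximum(x, 0.0), eps=1e-6):
    """One step of linear attention in its RNN-like (fast-weight) form.

    S : (d_v, d_k) fast weight matrix -- the fixed-size state
    z : (d_k,)     running sum of mapped keys (normaliser state)
    q, k : (d_k,)  query and key at the current step
    v : (d_v,)     value at the current step
    """
    k, q = phi(k), phi(q)        # positive feature map (assumption: ReLU)
    S = S + np.outer(v, k)       # fast-weight update: S_t = S_{t-1} + v_t k_t^T
    z = z + k                    # normaliser update:  z_t = z_{t-1} + k_t
    y = S @ q / (z @ q + eps)    # read-out:           y_t = S_t q_t / (z_t . q_t)
    return S, z, y

# Unrolling the recurrence recovers the self-attention view of the LT:
#   y_t = (sum_{i<=t} v_i phi(k_i)^T) phi(q_t) / (sum_{i<=t} phi(k_i) . phi(q_t)),
# i.e. softmax-free attention over the whole prefix.
```

Because the state (S, z) has a fixed size regardless of sequence length, the LT fits the real-time, finite-precision setting of the RNN hierarchy mentioned in the abstract.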
- Anthology ID: 2023.emnlp-main.588
- Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 9455–9465
- URL: https://aclanthology.org/2023.emnlp-main.588
- DOI: 10.18653/v1/2023.emnlp-main.588
- Cite (ACL): Kazuki Irie, Róbert Csordás, and Jürgen Schmidhuber. 2023. Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9455–9465, Singapore. Association for Computational Linguistics.
- Cite (Informal): Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions (Irie et al., EMNLP 2023)
- PDF: https://aclanthology.org/2023.emnlp-main.588.pdf