The Hidden Attention of Mamba Models

Ameen Ali Ali, Itamar Zimerman, Lior Wolf


Abstract
The Mamba layer offers an efficient selective state-space model (SSM) that is highly effective in modeling multiple domains, including NLP, long-range sequence processing, and computer vision. Selective SSMs are viewed as dual models, in which training runs in parallel over the entire sequence via an IO-aware parallel scan, while deployment proceeds in an autoregressive manner. We add a third view and show that such models can be viewed as attention-driven models. This new perspective enables us to empirically and theoretically compare the underlying mechanisms to those of attention in transformers, and allows us to peer inside the inner workings of the Mamba model with explainability methods. Our code is publicly available.
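To make the attention view concrete, the sketch below unrolls the standard discretized selective-SSM recurrence h_t = Ā_t ⊙ h_{t−1} + B̄_t x_t, y_t = C_t · h_t into a lower-triangular matrix α with α[t, j] = C_t · ((∏_{k=j+1..t} Ā_k) ⊙ B̄_j), so that y = α x. This is a minimal single-channel NumPy illustration of the general idea, not the authors' released code; the function name hidden_attention and all shapes and variable names are assumptions chosen for clarity.

```python
# Illustrative sketch (not the paper's official code): unrolling a discretized
# selective-SSM recurrence with a diagonal state matrix into an implicit,
# lower-triangular "hidden attention" matrix alpha, where
#   alpha[t, j] = C_t . (prod_{k=j+1..t} A_bar_k) * B_bar_j   and   y = alpha @ x.
import numpy as np

def hidden_attention(A_bar, B_bar, C, x):
    """A_bar, B_bar, C: (L, N) per-token discretized SSM parameters for one
    channel (diagonal state transition of size N); x: (L,) input sequence.
    Returns (alpha, y) with alpha of shape (L, L) and y of shape (L,)."""
    L, N = A_bar.shape
    alpha = np.zeros((L, L))
    for t in range(L):
        prod = np.ones(N)  # running product of A_bar over positions j+1..t
        for j in range(t, -1, -1):
            alpha[t, j] = C[t] @ (prod * B_bar[j])
            prod = prod * A_bar[j]  # extend the product down to position j
    return alpha, alpha @ x

# Tiny usage example with random parameters.
L, N = 6, 4
rng = np.random.default_rng(0)
A_bar = rng.uniform(0.5, 1.0, (L, N))      # decay-like diagonal transitions
B_bar, C = rng.normal(size=(L, N)), rng.normal(size=(L, N))
x = rng.normal(size=L)
alpha, y = hidden_attention(A_bar, B_bar, C, x)

# Sanity check: the sequential (autoregressive) recurrence gives the same output.
h, y_rec = np.zeros(N), np.zeros(L)
for t in range(L):
    h = A_bar[t] * h + B_bar[t] * x[t]
    y_rec[t] = C[t] @ h
assert np.allclose(y, y_rec)
```

The lower-triangular structure of alpha mirrors causal attention scores in transformers, which is what lets the row-normalized entries be inspected with attention-style explainability tools.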
Anthology ID:
2025.acl-long.76
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1516–1534
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.76/
Cite (ACL):
Ameen Ali Ali, Itamar Zimerman, and Lior Wolf. 2025. The Hidden Attention of Mamba Models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1516–1534, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
The Hidden Attention of Mamba Models (Ali et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.76.pdf