Abstract
Natural languages are believed to be (mildly) context-sensitive. Despite underpinning remarkably capable large language models, transformers are unable to model many context-free language tasks. In an attempt to address this limitation in the modeling power of transformer-based language models, we propose augmenting them with a differentiable, stack-based attention mechanism. Our stack-based attention mechanism can be incorporated into any transformer-based language model and adds a level of interpretability to the model. We show that the addition of our stack-based attention mechanism enables the transformer to model some, but not all, deterministic context-free languages.
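The abstract describes a differentiable, stack-based attention mechanism but does not spell out its update rule here; below is a minimal, hypothetical sketch of one common way to make a stack differentiable (a superposition-style soft stack in PyTorch), not the authors' exact mechanism. Names such as `SoftStackAttention`, `d_model`, and `stack_depth` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftStackAttention(nn.Module):
    """Illustrative differentiable ("soft") stack, read out position by position.

    At each position the model predicts a soft mixture of push / pop / no-op
    actions; the stack is updated as the convex combination of the three
    resulting stacks, so the whole module stays differentiable.
    NOTE: this is a sketch of the general idea, not the paper's mechanism.
    """

    def __init__(self, d_model: int, stack_depth: int = 16):
        super().__init__()
        self.stack_depth = stack_depth
        self.action = nn.Linear(d_model, 3)          # push / pop / no-op logits
        self.to_stack = nn.Linear(d_model, d_model)  # value written on a push

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model), e.g. the output of a transformer layer
        batch, seq_len, d_model = hidden.shape
        stack = hidden.new_zeros(batch, self.stack_depth, d_model)
        tops = []
        for t in range(seq_len):
            h_t = hidden[:, t]                                   # (batch, d_model)
            p_push, p_pop, p_noop = F.softmax(self.action(h_t), dim=-1).unbind(-1)

            # Stack after a hard push: new value on top, everything shifted down.
            pushed = torch.cat([self.to_stack(h_t).unsqueeze(1), stack[:, :-1]], dim=1)
            # Stack after a hard pop: everything shifted up, zeros at the bottom.
            popped = torch.cat([stack[:, 1:], stack.new_zeros(batch, 1, d_model)], dim=1)

            # Convex combination of the three candidate stacks keeps gradients flowing.
            stack = (p_push[:, None, None] * pushed
                     + p_pop[:, None, None] * popped
                     + p_noop[:, None, None] * stack)

            # Read out the soft top of the stack for this position; it could be
            # added to the residual stream of the host transformer.
            tops.append(stack[:, 0])
        return torch.stack(tops, dim=1)                          # (batch, seq_len, d_model)


# Quick usage check with toy shapes.
layer = SoftStackAttention(d_model=64)
out = layer(torch.randn(2, 10, 64))   # -> (2, 10, 64)
```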
- Anthology ID: 2024.findings-naacl.269
- Volume: Findings of the Association for Computational Linguistics: NAACL 2024
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Kevin Duh, Helena Gomez, Steven Bethard
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4318–4335
- URL: https://aclanthology.org/2024.findings-naacl.269
- DOI: 10.18653/v1/2024.findings-naacl.269
- Cite (ACL): Jiaoda Li, Jennifer White, Mrinmaya Sachan, and Ryan Cotterell. 2024. A Transformer with Stack Attention. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4318–4335, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): A Transformer with Stack Attention (Li et al., Findings 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.findings-naacl.269.pdf