Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers
Yingyu Liang, Heshan Liu, Zhenmei Shi, Zhao Song, Zhuoyan Xu, Jiale Zhao, Zhen Zhuang
Abstract
The self-attention mechanism is key to the success of transformers in recent large language models (LLMs). However, the quadratic computational cost, O(n^2), with respect to the input sequence length n poses a significant obstacle to further improvement and scalability in longer contexts.

In this work, we leverage the convolution-like structure of attention matrices to develop an efficient approximation method for attention computation using convolution matrices. We propose a \mathsf{conv} basis system, analogous to the rank basis, and show that any lower triangular matrix can be decomposed as a sum of structured convolution matrices in this basis. We then design a fast algorithm to approximate the attention matrix using a sum of k convolution matrices. This enables us to compute attention during inference via Fast Fourier Transforms (FFT) in O(knd log n) time, where d is the hidden dimension, achieving nearly linear time complexity, n^{1+o(1)}, in practical scenarios where kd = n^{o(1)}. Furthermore, both the training forward pass and the backward gradient computation can be performed in n^{1+o(1)} time as well.

We provide theoretical guarantees on runtime and approximation error and conduct preliminary experiments to evaluate the effectiveness of our approach. We hope this new paradigm for accelerating attention computation in transformer models facilitates their application to longer contexts.
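To make the O(knd log n) inference step concrete, here is a minimal NumPy sketch of multiplying a sum of k lower-triangular convolution matrices by the value matrix via FFT. It assumes the conv-basis vectors have already been recovered (the paper's decomposition algorithm is not reproduced here), and the function names are purely illustrative.

```python
import numpy as np

def causal_conv_matmul(b, V):
    """Compute conv(b) @ V via FFT, where conv(b) is the n x n lower-triangular
    convolution matrix with entry b[i - j] at position (i, j) for i >= j and 0
    above the diagonal. Cost is O(n d log n) instead of the O(n^2 d) dense product."""
    n, d = V.shape
    m = 2 * n  # zero-pad so the circular convolution matches the linear one
    Bf = np.fft.rfft(b, m)           # spectrum of the basis vector
    Vf = np.fft.rfft(V, m, axis=0)   # column-wise spectra of V
    full = np.fft.irfft(Bf[:, None] * Vf, m, axis=0)
    return full[:n]                  # keep the causal (first n) rows

def conv_basis_attention(bs, V):
    """Approximate A @ V when the attention matrix A is (approximately) a sum of
    k lower-triangular convolution matrices, one per row of `bs` (shape k x n).
    Total cost is O(k n d log n). Hypothetical helper: how `bs` is recovered from
    the attention matrix is the paper's contribution and is not shown here."""
    return sum(causal_conv_matmul(b, V) for b in bs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, k = 256, 16, 4
    bs = rng.standard_normal((k, n))
    V = rng.standard_normal((n, d))
    # Dense reference: materialize each convolution matrix explicitly.
    dense = sum(
        np.array([[b[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
        for b in bs
    )
    assert np.allclose(dense @ V, conv_basis_attention(bs, V))
```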
- Anthology ID: 2025.findings-emnlp.363
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2025
- Month: November
- Year: 2025
- Address: Suzhou, China
- Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 6857–6894
- URL: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.363/
- DOI: 10.18653/v1/2025.findings-emnlp.363
- Cite (ACL): Yingyu Liang, Heshan Liu, Zhenmei Shi, Zhao Song, Zhuoyan Xu, Jiale Zhao, and Zhen Zhuang. 2025. Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 6857–6894, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal): Conv-Basis: A New Paradigm for Efficient Attention Inference and Gradient Computation in Transformers (Liang et al., Findings 2025)
- PDF: https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.363.pdf