Abstract
Transformer-based pre-trained models, such as BERT, have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications. However, deploying these models can be prohibitively costly, as the standard self-attention mechanism of the Transformer suffers from quadratic computational cost in the input sequence length. To confront this, we propose FCA, a fine- and coarse-granularity hybrid self-attention that reduces the computational cost by progressively shortening the sequence length processed by self-attention. Specifically, FCA applies an attention-based scoring strategy to determine the informativeness of tokens at each layer. The informative tokens then serve as the fine-granularity computing units in self-attention, while the uninformative tokens are replaced with one or several clusters that serve as the coarse-granularity computing units. Experiments on the standard GLUE benchmark show that BERT with FCA achieves a 2x reduction in FLOPs over the original BERT with less than 1% loss in accuracy. We show that FCA offers a significantly better trade-off between accuracy and FLOPs than prior methods.
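As a rough illustration of the sequence-shortening step the abstract describes, the sketch below (PyTorch) scores tokens by the attention they receive, keeps the top-scoring ones as fine-granularity units, and replaces the remainder with a few mean-pooled cluster tokens. The function name `fca_compress`, the mean-over-heads-and-queries scoring, and the chunk-and-average clustering are illustrative assumptions, not the paper's exact procedure; see the linked repository (pierre-zhao/fca-bert) for the authors' implementation.

```python
import torch

def fca_compress(hidden_states, attention_probs, num_fine, num_clusters=1):
    """Shorten the sequence fed to the next self-attention layer (illustrative sketch).

    hidden_states:   (batch, seq_len, hidden)          token representations from the current layer
    attention_probs: (batch, heads, seq_len, seq_len)  softmaxed attention of the current layer
    num_fine:        number of informative tokens kept as fine-granularity units
    num_clusters:    number of coarse-granularity cluster tokens replacing the rest
    """
    batch, seq_len, hidden = hidden_states.shape

    # Attention-based informativeness score: how much attention each token
    # receives, averaged over heads and query positions (one plausible choice).
    scores = attention_probs.mean(dim=1).mean(dim=1)                 # (batch, seq_len)

    # Split tokens into informative (fine) and uninformative (coarse) sets.
    fine_idx = scores.topk(num_fine, dim=-1).indices                 # (batch, num_fine)
    fine_idx, _ = fine_idx.sort(dim=-1)                              # keep original token order
    coarse_mask = torch.ones(batch, seq_len, dtype=torch.bool, device=hidden_states.device)
    coarse_mask.scatter_(1, fine_idx, False)

    fine_tokens = torch.gather(
        hidden_states, 1, fine_idx.unsqueeze(-1).expand(-1, -1, hidden)
    )                                                                # (batch, num_fine, hidden)

    # Replace uninformative tokens with a few cluster tokens; here each cluster
    # is a simple mean over an equal-sized chunk (the paper may cluster differently).
    coarse_tokens = hidden_states[coarse_mask].view(batch, -1, hidden)
    cluster_tokens = torch.stack(
        [c.mean(dim=1) for c in coarse_tokens.chunk(num_clusters, dim=1)], dim=1
    )                                                                # (batch, num_clusters, hidden)

    # Shortened sequence for the next layer: fine tokens + coarse cluster tokens.
    return torch.cat([fine_tokens, cluster_tokens], dim=1)

if __name__ == "__main__":
    h = torch.randn(2, 128, 768)                                     # layer hidden states
    a = torch.softmax(torch.randn(2, 12, 128, 128), dim=-1)          # attention probabilities
    print(fca_compress(h, a, num_fine=64, num_clusters=4).shape)     # torch.Size([2, 68, 768])
```

Because the shortened sequence (here 68 tokens instead of 128) is what the next layer attends over, the quadratic cost of self-attention drops layer by layer, which is the source of the FLOPs reduction reported in the abstract.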
- Anthology ID: 2022.acl-long.330
- Volume: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Editors: Smaranda Muresan, Preslav Nakov, Aline Villavicencio
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 4811–4820
- URL: https://aclanthology.org/2022.acl-long.330
- DOI: 10.18653/v1/2022.acl-long.330
- Cite (ACL): Jing Zhao, Yifan Wang, Junwei Bao, Youzheng Wu, and Xiaodong He. 2022. Fine- and Coarse-Granularity Hybrid Self-Attention for Efficient BERT. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4811–4820, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Fine- and Coarse-Granularity Hybrid Self-Attention for Efficient BERT (Zhao et al., ACL 2022)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/2022.acl-long.330.pdf
- Code: pierre-zhao/fca-bert
- Data: GLUE, QNLI, RACE