Blockwise Self-Attention for Long Document Understanding
Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, Jie Tang
Abstract
We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa.
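To make the blockwise attention pattern concrete, here is a minimal sketch (not the released xptree/BlockBERT code; function names, shapes, and the permutation-routing formulation are illustrative assumptions). Each query block attends to exactly one key/value block chosen by a per-head permutation: the identity permutation gives short-range (local) heads, while a shifted permutation gives long-range heads.

```python
import torch

def blockwise_attention(q, k, v, num_blocks, perm):
    """Sparse blockwise attention sketch (illustrative, not BlockBERT itself).

    Queries in block i attend only to keys/values in block perm[i], so each
    head computes n score matrices of size (L/n)^2 instead of one L x L
    matrix, cutting the attention-score memory by roughly a factor of n.

    q, k, v: (batch, seq_len, dim); seq_len must be divisible by num_blocks.
    perm:    a permutation of range(num_blocks).
    """
    bsz, seq_len, dim = q.shape
    blk = seq_len // num_blocks
    # Reshape to (batch, num_blocks, block_len, dim).
    q = q.view(bsz, num_blocks, blk, dim)
    # Route key/value block perm[i] to query block i.
    k = k.view(bsz, num_blocks, blk, dim)[:, perm]
    v = v.view(bsz, num_blocks, blk, dim)[:, perm]
    # Scaled dot-product attention within each (query block, routed block) pair.
    scores = torch.einsum("bnqd,bnkd->bnqk", q, k) / dim ** 0.5
    out = torch.einsum("bnqk,bnkd->bnqd", scores.softmax(dim=-1), v)
    return out.reshape(bsz, seq_len, dim)

# Identity permutation = short-range heads; a rotation = long-range heads.
q = k = v = torch.randn(2, 512, 64)
local_out = blockwise_attention(q, k, v, num_blocks=4, perm=[0, 1, 2, 3])
long_out = blockwise_attention(q, k, v, num_blocks=4, perm=[1, 2, 3, 0])
```

Since the full L x L score matrix is replaced by n blocks of size (L/n)^2, the attention-score memory shrinks to L^2/n per head, which is the source of the memory and time savings reported in the abstract.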
- Anthology ID: 2020.findings-emnlp.232
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 2555–2565
- URL: https://aclanthology.org/2020.findings-emnlp.232
- DOI: 10.18653/v1/2020.findings-emnlp.232
- Cite (ACL): Jiezhong Qiu, Hao Ma, Omer Levy, Wen-tau Yih, Sinong Wang, and Jie Tang. 2020. Blockwise Self-Attention for Long Document Understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2555–2565, Online. Association for Computational Linguistics.
- Cite (Informal): Blockwise Self-Attention for Long Document Understanding (Qiu et al., Findings 2020)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2020.findings-emnlp.232.pdf
- Code: xptree/BlockBERT
- Data: HotpotQA, NewsQA, SearchQA, TriviaQA