schuBERT: Optimizing Elements of BERT

Ashish Khetan, Zohar Karnin


Abstract
Transformers have gradually become a key component of many state-of-the-art natural language representation models. A recent Transformer-based model, BERT, achieved state-of-the-art results on various natural language processing tasks, including GLUE, SQuAD v1.1, and SQuAD v2.0. This model, however, is computationally prohibitive and has a huge number of parameters. In this work we revisit the architecture choices of BERT in an effort to obtain a lighter model. We focus on reducing the number of parameters, yet our methods can be applied towards other objectives such as FLOPs or latency. We show that much more efficient light BERT models can be obtained by reducing algorithmically chosen correct architecture design dimensions rather than reducing the number of Transformer encoder layers. In particular, our schuBERT gives 6.6% higher average accuracy on GLUE and SQuAD datasets as compared to BERT with three encoder layers while having the same number of parameters.
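To make the abstract's trade-off concrete, the following minimal sketch contrasts the two ways of shrinking BERT that it compares: dropping encoder layers versus keeping all layers but shrinking per-layer design dimensions (hidden size, attention heads, feed-forward width). It uses the Hugging Face `transformers` BertConfig/BertModel API; the specific reduced dimensions (hidden size 456, 8 heads, intermediate size 1824) are illustrative assumptions chosen only to land near the same parameter budget, not the dimensions learned by schuBERT.

```python
# Illustrative sketch, not the paper's exact schuBERT configuration: compare the
# parameter counts of a 3-layer BERT-base-width model against a 12-layer model
# whose per-layer dimensions have been shrunk to a similar budget.
from transformers import BertConfig, BertModel

# Baseline from the abstract's comparison: BERT-base width, only 3 encoder layers.
shallow_cfg = BertConfig(
    num_hidden_layers=3,
    hidden_size=768,
    num_attention_heads=12,
    intermediate_size=3072,
)

# schuBERT-style alternative: keep all 12 layers but reduce the design dimensions.
slim_cfg = BertConfig(
    num_hidden_layers=12,
    hidden_size=456,          # illustrative width, not the learned schuBERT value
    num_attention_heads=8,
    intermediate_size=1824,   # 4 * hidden_size, mirroring BERT's default ratio
)

for name, cfg in [("3-layer BERT", shallow_cfg), ("12-layer slim BERT", slim_cfg)]:
    model = BertModel(cfg)  # randomly initialized; no pretrained weights are downloaded
    print(f"{name}: {model.num_parameters() / 1e6:.1f}M parameters")
```

The paper's reported result is that, at an equal parameter budget, the dimension-reduced model (schuBERT) outperforms the 3-layer baseline by 6.6% average accuracy on GLUE and SQuAD.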
Anthology ID:
2020.acl-main.250
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2807–2818
URL:
https://aclanthology.org/2020.acl-main.250
DOI:
10.18653/v1/2020.acl-main.250
Cite (ACL):
Ashish Khetan and Zohar Karnin. 2020. schuBERT: Optimizing Elements of BERT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2807–2818, Online. Association for Computational Linguistics.
Cite (Informal):
schuBERT: Optimizing Elements of BERT (Khetan & Karnin, ACL 2020)
PDF:
https://preview.aclanthology.org/landing_page/2020.acl-main.250.pdf
Video:
http://slideslive.com/38929349
Data
MultiNLI