A Primer in BERTology: What We Know About How BERT Works

Anna Rogers, Olga Kovaleva, Anna Rumshisky


Abstract
Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.
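The analyses surveyed in the paper typically start from BERT's layer-wise hidden states and attention maps. As an illustrative sketch only (not drawn from the paper itself), the snippet below uses the Hugging Face transformers library to load a pretrained BERT model and expose those internals; the checkpoint name bert-base-uncased and the example sentence are assumptions chosen for demonstration.

```python
# Illustrative sketch: inspect BERT's hidden states and attention maps,
# the raw material for most BERTology-style analyses.
# Assumes the Hugging Face `transformers` library and the public
# `bert-base-uncased` checkpoint (not prescribed by the paper itself).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained(
    "bert-base-uncased",
    output_hidden_states=True,   # return embedding output + all 12 layer outputs
    output_attentions=True,      # return per-layer, per-head attention maps
)
model.eval()

inputs = tokenizer("BERTology asks what BERT learns and where.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(len(outputs.hidden_states))       # 13: embedding layer + 12 Transformer layers
print(outputs.hidden_states[-1].shape)  # (batch, sequence_length, 768)
print(outputs.attentions[0].shape)      # (batch, 12 heads, seq_len, seq_len)
```

Probing classifiers, attention-pattern studies, and pruning experiments of the kind reviewed in the survey all operate on tensors of exactly this shape.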
Anthology ID: 2020.tacl-1.54
Volume: Transactions of the Association for Computational Linguistics, Volume 8
Year: 2020
Address: Cambridge, MA
Editors: Mark Johnson, Brian Roark, Ani Nenkova
Venue: TACL
Publisher: MIT Press
Pages: 842–866
URL: https://aclanthology.org/2020.tacl-1.54
DOI: 10.1162/tacl_a_00349
Cite (ACL): Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A Primer in BERTology: What We Know About How BERT Works. Transactions of the Association for Computational Linguistics, 8:842–866.
Cite (Informal): A Primer in BERTology: What We Know About How BERT Works (Rogers et al., TACL 2020)
PDF: https://preview.aclanthology.org/add_acl24_videos/2020.tacl-1.54.pdf
Video: https://preview.aclanthology.org/add_acl24_videos/2020.tacl-1.54.mp4