PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models

Leonardo Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto


Abstract
Large Language Models (LLMs) are impressive machines with the ability to memorize, and possibly generalize, learning examples. We present here a small, focused contribution to the analysis of the interplay between memorization and performance of BERT in downstream tasks. We propose PreCog, a measure for evaluating memorization from pre-training, and we analyze its correlation with BERT's performance. Our experiments show that highly memorized examples are better classified, suggesting memorization is an essential key to success for BERT.
Anthology ID: 2023.ranlp-1.103
Volume: Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Month: September
Year: 2023
Address: Varna, Bulgaria
Editors: Ruslan Mitkov, Galia Angelova
Venue: RANLP
Publisher: INCOMA Ltd., Shoumen, Bulgaria
Pages: 961–967
URL: https://aclanthology.org/2023.ranlp-1.103
Cite (ACL): Leonardo Ranaldi, Elena Sofia Ruzzetti, and Fabio Massimo Zanzotto. 2023. PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 961–967, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
Cite (Informal): PreCog: Exploring the Relation between Memorization and Performance in Pre-trained Language Models (Ranaldi et al., RANLP 2023)
PDF: https://preview.aclanthology.org/naacl24-info/2023.ranlp-1.103.pdf