From Teacher to Student: Tracking Memorization Through Model Distillation

Simardeep Singh


Abstract
Large language models (LLMs) are known to memorize parts of their training data, raising important concerns around privacy and security. While previous research has focused on studying memorization in pre-trained models, much less is known about how knowledge distillation (KD) affects memorization. In this study, we explore how different KD methods influence the memorization of fine-tuned task data when a large teacher model is distilled into smaller student variants. We show that distilling a larger teacher model, fine-tuned on a dataset, into a smaller variant not only lowers computational costs and model size but also significantly reduces the memorization risk compared to standard fine-tuning approaches.
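For readers unfamiliar with the teacher-to-student setup the abstract refers to, the following is a minimal, generic sketch of a standard knowledge-distillation objective (temperature-scaled KL divergence to the teacher's soft targets blended with hard-label cross-entropy). It is illustrative only: the function name kd_loss, the temperature T, and the weighting alpha are assumptions for this sketch, not the specific KD methods or hyperparameters evaluated in the paper.

# Illustrative sketch only: a generic knowledge-distillation loss,
# not the paper's exact method. Assumes PyTorch is available.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target distillation with standard supervised loss."""
    # Soft targets: student matches the teacher's temperature-smoothed
    # distribution (scaled by T^2 to keep gradient magnitudes comparable).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits (vocabulary size 10, batch of 4).
student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(kd_loss(student_logits, teacher_logits, labels))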
Anthology ID:
2025.l2m2-1.6
Volume:
Proceedings of the First Workshop on Large Language Model Memorization (L2M2)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Robin Jia, Eric Wallace, Yangsibo Huang, Tiago Pimentel, Pratyush Maini, Verna Dankers, Johnny Wei, Pietro Lesci
Venues:
L2M2 | WS
Publisher:
Association for Computational Linguistics
Pages:
78–82
URL:
https://preview.aclanthology.org/display_plenaries/2025.l2m2-1.6/
Cite (ACL):
Simardeep Singh. 2025. From Teacher to Student: Tracking Memorization Through Model Distillation. In Proceedings of the First Workshop on Large Language Model Memorization (L2M2), pages 78–82, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
From Teacher to Student: Tracking Memorization Through Model Distillation (Singh, L2M2 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.l2m2-1.6.pdf