Simardeep Singh


2025

From Teacher to Student: Tracking Memorization Through Model Distillation
Simardeep Singh
Proceedings of the First Workshop on Large Language Model Memorization (L2M2)

Large language models (LLMs) are known to memorize parts of their training data, raising important concerns around privacy and security. While previous research has focused on studying memorization in pre-trained models, much less is known about how knowledge distillation (KD) affects memorization. In this study, we explore how different KD methods influence the memorization of fine-tuned task data when a large teacher model is distilled into smaller student variants. This study demonstrates that distilling a larger teacher model, fine-tuned on a dataset, into a smaller variant not only lowers computational costs and model size but also significantly reduces the memorization risks compared to standard fine-tuning approaches.
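The abstract does not specify which KD objectives were compared. As a rough illustration only, the sketch below shows generic logit-based distillation (a temperature-scaled KL term toward the teacher combined with the usual hard-label loss); the function name and hyperparameters (`T`, `alpha`) are placeholder assumptions, not settings from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic KD objective: soft-label KL (teacher -> student) + hard-label CE.

    Shapes: logits are (batch, seq_len, vocab), labels are (batch, seq_len).
    This is an illustrative sketch, not the paper's specific KD method.
    """
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradients keep a comparable magnitude.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard targets: standard next-token cross-entropy on the fine-tuning data.
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    return alpha * soft + (1.0 - alpha) * hard
```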