On the Generalization vs Fidelity Paradox in Knowledge Distillation

Suhas Kamasetty Ramesh, Ayan Sengupta, Tanmoy Chakraborty


Abstract
Knowledge distillation (KD) is a key technique for compressing large language models into smaller ones while preserving performance. Despite the recent traction of KD research, its effectiveness for smaller language models (LMs) and the mechanisms driving knowledge transfer remain underexplored. In this work, we present the first large-scale empirical and statistical analysis of KD across models ranging from 0.5B to 7B parameters on 14 complex reasoning tasks in a zero-shot setting. Our findings reveal that KD can improve the average performance of smaller models by up to 10%, with a peak task-specific gain of 22%, while providing only marginal benefits (∼1.3%) for larger models. Surprisingly, teacher performance has a minimal impact on student outcomes, whereas teacher task expertise does influence KD effectiveness. A correlation study indicates that smaller LMs benefit more from KD, whereas larger LMs show diminished gains. Additionally, we uncover a misalignment between improvements in student performance and reasoning fidelity, suggesting that while KD enhances accuracy, it does not always maintain the structured decision-making processes of the teacher. Our ablation study further highlights the importance of teacher signals and logit smoothing in shaping students’ performance after distillation. Overall, our study offers a comprehensive empirical and statistical assessment of KD, highlighting both its benefits and trade-offs when distilling knowledge from larger to smaller LMs.
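For context, the "teacher signals" and "logit smoothing" mentioned in the ablation study correspond, in standard KD formulations, to temperature-scaled soft targets from the teacher. The sketch below illustrates a generic temperature-scaled distillation loss (Hinton et al., 2015-style); the function name, temperature T, and mixing weight alpha are illustrative assumptions and are not claimed to match the authors' exact training setup.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soften both distributions with temperature T ("logit smoothing").
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    # KL divergence between the teacher's and student's soft distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    distill = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Standard cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # Blend the teacher signal with the hard-label signal.
    return alpha * distill + (1.0 - alpha) * ce

Raising T flattens the teacher distribution, transferring more of its "dark knowledge" about relative class similarities; alpha controls how much the student relies on the teacher versus the ground-truth labels.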
Anthology ID:
2025.findings-acl.923
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
17930–17951
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.923/
Cite (ACL):
Suhas Kamasetty Ramesh, Ayan Sengupta, and Tanmoy Chakraborty. 2025. On the Generalization vs Fidelity Paradox in Knowledge Distillation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17930–17951, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
On the Generalization vs Fidelity Paradox in Knowledge Distillation (Ramesh et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.923.pdf