SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models

Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, Kyomin Jung


Abstract
Despite the success of Large Language Models (LLMs), they still face challenges related to high inference costs and memory requirements. To address these issues, Knowledge Distillation (KD) has emerged as a popular method for model compression, and the use of student-generated outputs (SGOs) as training data is particularly notable for reducing the mismatch between training and inference. However, SGOs often contain noisy and biased sequences, which can cause the teacher model to provide misguided feedback, especially on long sequences. To mitigate these challenges, we propose SWITCH (Studying With Teacher for Knowledge Distillation), a novel approach that strategically incorporates the teacher model during the student's sequence generation. SWITCH identifies discrepancies between the token probabilities of the teacher and student models, allowing the teacher to intervene selectively, particularly in long sequences that are more prone to teacher misguidance. Extensive experiments across three model families and five instruction-following datasets show that SWITCH surpasses traditional KD methods, excelling particularly in the generation of long sequences.
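The sketch below illustrates, in PyTorch, the core idea the abstract describes: during the student's sequence generation, compare the teacher's and student's next-token distributions and let the teacher take over a decoding step when the two disagree strongly. The discrepancy measure (total variation distance), the threshold value, and the greedy decoding loop are illustrative assumptions, not the paper's exact algorithm; `student` and `teacher` stand for any causal LMs that return `.logits`.

import torch
import torch.nn.functional as F

@torch.no_grad()
def switch_generate(student, teacher, input_ids, max_new_tokens=64,
                    eos_token_id=None, threshold=0.5):
    """Greedy decoding in which the teacher intervenes on high-discrepancy steps.

    `student` and `teacher` are causal LMs returning `.logits` of shape
    (batch, seq_len, vocab). `threshold` on the total-variation distance
    between next-token distributions is a hypothetical choice made here
    for illustration only.
    """
    seq = input_ids
    for _ in range(max_new_tokens):
        s_probs = F.softmax(student(seq).logits[:, -1, :], dim=-1)
        t_probs = F.softmax(teacher(seq).logits[:, -1, :], dim=-1)

        # Token-level discrepancy: total variation distance in [0, 1].
        tv = 0.5 * (s_probs - t_probs).abs().sum(dim=-1)

        # Student proposes a token; if the models disagree beyond the
        # threshold, the teacher's token is used for this step instead.
        next_student = s_probs.argmax(dim=-1, keepdim=True)
        next_teacher = t_probs.argmax(dim=-1, keepdim=True)
        next_token = torch.where(tv.unsqueeze(-1) > threshold,
                                 next_teacher, next_student)

        seq = torch.cat([seq, next_token], dim=-1)
        if eos_token_id is not None and (next_token == eos_token_id).all():
            break
    return seq

In an SGO-style KD pipeline, sequences produced this way would then serve as the training data on which the distillation loss is computed; the intervention rule above is only a minimal stand-in for SWITCH's selective teacher involvement.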
Anthology ID:
2025.findings-naacl.206
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3733–3746
URL:
https://preview.aclanthology.org/moar-dois/2025.findings-naacl.206/
DOI:
10.18653/v1/2025.findings-naacl.206
Cite (ACL):
Jahyun Koo, Yerin Hwang, Yongil Kim, Taegwan Kang, Hyunkyung Bae, and Kyomin Jung. 2025. SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3733–3746, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
SWITCH: Studying with Teacher for Knowledge Distillation of Large Language Models (Koo et al., Findings 2025)
PDF:
https://preview.aclanthology.org/moar-dois/2025.findings-naacl.206.pdf