Hard Gate Knowledge Distillation - Leverage Calibration for Robust and Reliable Language Model

Dongkyu Lee, Zhiliang Tian, Yingxiu Zhao, Ka Chun Cheung, Nevin Zhang


Abstract
In knowledge distillation, a student model is trained with supervision from both the knowledge of a teacher and observations drawn from a training data distribution. A teacher's knowledge is regarded as holding inter-class relations that provide meaningful supervision to a student; hence, much effort has been devoted to finding such knowledge to distill. In this paper, we explore a question that has received little attention: "when to distill such knowledge." We answer this question with the concept of model calibration; we view a teacher model not only as a source of knowledge but also as a gauge for detecting miscalibration in a student. This simple yet novel view leads to a hard gate knowledge distillation scheme that switches between learning from the teacher model and learning from the training data. We verify the gating mechanism in the context of natural language generation at both the token level and the sentence level. Empirical comparisons with strong baselines show that hard gate knowledge distillation not only improves model generalization but also significantly lowers model calibration error.
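To make the gating idea concrete, below is a minimal PyTorch sketch of a token-level hard gate between distillation and data supervision. The gating criterion shown here (comparing the student's and the teacher's probability of the gold token, and distilling only where the student looks overconfident) is an illustrative assumption rather than the paper's exact rule, and the function name `hard_gate_kd_loss` is hypothetical.

```python
import torch
import torch.nn.functional as F

def hard_gate_kd_loss(student_logits, teacher_logits, targets, temperature=1.0):
    """Token-level hard-gate KD sketch.

    Shapes: logits are (batch, seq_len, vocab); targets is (batch, seq_len)
    gold token ids.

    Assumed gate: a token is treated as miscalibrated when the student assigns
    the gold token a higher probability than the teacher does; such tokens
    learn from the teacher, the rest learn from the training data.
    """
    student_logp = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_p = F.softmax(teacher_logits / temperature, dim=-1)

    # Probability each model assigns to the gold next token.
    p_student_gold = student_logp.exp().gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_teacher_gold = teacher_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    # Hard (0/1) gate: 1 -> distill from the teacher, 0 -> train on the data.
    gate = (p_student_gold > p_teacher_gold).float()

    # Per-token KL to the teacher distribution and cross-entropy to the gold label.
    kd_loss = F.kl_div(student_logp, teacher_p, reduction="none").sum(-1)
    ce_loss = F.nll_loss(
        student_logp.view(-1, student_logp.size(-1)),
        targets.view(-1),
        reduction="none",
    ).view(targets.shape)

    # Switch between the two supervision signals token by token.
    return (gate * kd_loss + (1.0 - gate) * ce_loss).mean()


# Example call with random tensors (batch=2, seq_len=5, vocab=100):
# loss = hard_gate_kd_loss(torch.randn(2, 5, 100), torch.randn(2, 5, 100),
#                          torch.randint(0, 100, (2, 5)))
```

Because the gate is applied per token, positions where the student already agrees with (or is less confident than) the teacher keep learning from the data, while overconfident positions receive the teacher's softer distribution; a sentence-level variant would apply one gate per sequence instead.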
Anthology ID:
2022.emnlp-main.665
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9793–9803
URL:
https://aclanthology.org/2022.emnlp-main.665
DOI:
10.18653/v1/2022.emnlp-main.665
Cite (ACL):
Dongkyu Lee, Zhiliang Tian, Yingxiu Zhao, Ka Chun Cheung, and Nevin Zhang. 2022. Hard Gate Knowledge Distillation - Leverage Calibration for Robust and Reliable Language Model. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9793–9803, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Hard Gate Knowledge Distillation - Leverage Calibration for Robust and Reliable Language Model (Lee et al., EMNLP 2022)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.665.pdf