Self-Knowledge Distillation in Natural Language Processing

Sangchul Hahn, Heeyoul Choi


Abstract
Since deep learning became a key player in natural language processing (NLP), many deep learning models have shown remarkable performance on a variety of NLP tasks. Such high performance can be explained by the efficient knowledge representation of deep learning models. Knowledge distillation from pretrained deep networks suggests that we can use more information from the soft target probability to train other neural networks. In this paper, we propose a self-knowledge distillation method based on the soft target probabilities of the training model itself, where multimode information is distilled from the word embedding space right below the softmax layer. To reduce the time complexity, our method approximates the soft target probabilities. In experiments, we applied the proposed method to two fundamental NLP tasks: language modeling and neural machine translation. The experimental results show that the proposed method improves performance on both tasks.
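As a rough illustration of the general idea described above (deriving soft targets from the output word-embedding space and distilling them back into the model), the PyTorch sketch below builds a soft target distribution from embedding similarities and mixes a KL term with the usual hard-target cross-entropy. This is a hypothetical sketch, not the paper's exact formulation or its approximation; the function names, the cosine-similarity choice, the temperature, and the alpha weighting are all assumptions made for the example.

import torch
import torch.nn.functional as F

def soft_target_from_embeddings(target_ids, embedding, temperature=1.0):
    # embedding: (vocab_size, dim) output-side word embedding matrix.
    # Build a soft target for each gold word from its cosine similarity
    # to every word in the vocabulary (illustrative choice, not the
    # paper's exact definition).
    emb = F.normalize(embedding, dim=-1)          # (V, d) unit-norm embeddings
    tgt = emb[target_ids]                         # (batch, d) gold-word embeddings
    sim = tgt @ emb.t()                           # (batch, V) cosine similarities
    return F.softmax(sim / temperature, dim=-1)   # soft target distribution

def self_distillation_loss(logits, target_ids, embedding, alpha=0.5, temperature=1.0):
    # Mix the standard hard-target cross-entropy with a KL term toward
    # the embedding-derived soft targets (hypothetical weighting scheme).
    hard = F.cross_entropy(logits, target_ids)
    with torch.no_grad():                         # treat soft targets as fixed
        soft = soft_target_from_embeddings(target_ids, embedding, temperature)
    log_probs = F.log_softmax(logits, dim=-1)
    kl = F.kl_div(log_probs, soft, reduction="batchmean")
    return (1.0 - alpha) * hard + alpha * kl

In this sketch the soft targets are detached from the computation graph, so the distillation term only shapes the model's predictions; whether and how the embedding matrix itself receives gradients from this term is a design choice not specified here.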
Anthology ID:
R19-1050
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Month:
September
Year:
2019
Address:
Varna, Bulgaria
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
423–430
URL:
https://aclanthology.org/R19-1050
DOI:
10.26615/978-954-452-056-4_050
Cite (ACL):
Sangchul Hahn and Heeyoul Choi. 2019. Self-Knowledge Distillation in Natural Language Processing. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019), pages 423–430, Varna, Bulgaria. INCOMA Ltd.
Cite (Informal):
Self-Knowledge Distillation in Natural Language Processing (Hahn & Choi, RANLP 2019)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/R19-1050.pdf