Lifelong Language Knowledge Distillation

Yung-Sung Chuang, Shang-Yu Su, Yun-Nung Chen


Abstract
It is challenging to perform lifelong language learning (LLL) on a stream of different tasks without any performance degradation compared to the multi-task counterparts. To address this issue, we present Lifelong Language Knowledge Distillation (L2KD), a simple but efficient method that can be easily applied to existing LLL architectures in order to mitigate the degradation. Specifically, when the LLL model is trained on a new task, we assign a teacher model to first learn the new task, and pass the knowledge to the LLL model via knowledge distillation. Therefore, the LLL model can better adapt to the new task while keeping the previously learned knowledge. Experiments show that the proposed L2KD consistently improves previous state-of-the-art models, and the degradation compared to multi-task models in LLL tasks is well mitigated for both sequence generation and text classification tasks.
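The core step described in the abstract is standard word-level knowledge distillation applied each time a new task arrives: a teacher is first fine-tuned on the new task, and the lifelong learner is then trained with a mix of the hard-label loss and the teacher's softened predictions. Below is a minimal sketch of that step, assuming PyTorch; the function name, batch layout, and the alpha/temperature hyperparameters are illustrative assumptions and not the interface of the released voidism/L2KD code.

```python
import torch
import torch.nn.functional as F

def l2kd_step(student, teacher, batch, optimizer, temperature=2.0, alpha=0.5):
    """One training step on a new task: combine cross-entropy on hard labels
    with a distillation term from the task-specific teacher.
    (Illustrative sketch; `student`/`teacher` are assumed to map input ids
    to per-token logits of shape (B, T, V).)"""
    inputs, labels = batch                        # labels: (B, T) token ids
    with torch.no_grad():
        teacher_logits = teacher(inputs)          # teacher is frozen here
    student_logits = student(inputs)

    # Hard-label cross-entropy on the new task.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )

    # Word-level soft-target distillation: KL between temperature-softened
    # teacher and student distributions, scaled by T^2 as is conventional.
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t ** 2)

    loss = alpha * kd + (1.0 - alpha) * ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```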
Anthology ID:
2020.emnlp-main.233
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2914–2924
URL:
https://aclanthology.org/2020.emnlp-main.233
DOI:
10.18653/v1/2020.emnlp-main.233
Cite (ACL):
Yung-Sung Chuang, Shang-Yu Su, and Yun-Nung Chen. 2020. Lifelong Language Knowledge Distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2914–2924, Online. Association for Computational Linguistics.
Cite (Informal):
Lifelong Language Knowledge Distillation (Chuang et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2020.emnlp-main.233.pdf
Video:
https://slideslive.com/38938863
Code:
voidism/L2KD
Data:
WikiSQL, decaNLP