Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings

Luke Melas-Kyriazi, George Han, Celine Liang


Abstract
Over the past year, the emergence of transfer learning with large-scale language models (LMs) has led to dramatic performance improvements across a broad range of natural language understanding tasks. However, the size and memory footprint of these large LMs often make them difficult to deploy in many scenarios (e.g. on mobile phones). Recent research points to knowledge distillation as a potential solution, showing that when training data for a given task is abundant, it is possible to distill a large (teacher) LM into a small task-specific (student) network with minimal loss of performance. However, when such data is scarce, there remains a significant performance gap between large pretrained LMs and smaller task-specific models, even when training via distillation. In this paper, we bridge this gap with a novel training approach, called generation-distillation, that leverages large finetuned LMs in two ways: (1) to generate new (unlabeled) training examples, and (2) to distill their knowledge into a small network using these examples. Across three low-resource text classification datasets, we achieve comparable performance to BERT while using 300 times fewer parameters, and we outperform prior approaches to distillation for text classification while using 3 times fewer parameters.
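The distillation step described in the abstract trains the student to match the teacher's output distribution rather than hard labels. A minimal sketch of this standard soft-label objective (the temperature value and function names here are illustrative assumptions, not taken from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits.
    Higher temperatures produce softer (more uniform) distributions."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution (used as
    soft labels) and the student's softened distribution. Minimized when
    the student reproduces the teacher's distribution."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

Because the teacher's soft labels require no gold annotations, this loss can be applied to the new unlabeled examples generated by the finetuned LM, which is what makes the generation step useful in low-data settings.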
Anthology ID:
D19-6114
Volume:
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Colin Cherry, Greg Durrett, George Foster, Reza Haffari, Shahram Khadivi, Nanyun Peng, Xiang Ren, Swabha Swayamdipta
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
124–131
URL:
https://aclanthology.org/D19-6114
DOI:
10.18653/v1/D19-6114
Cite (ACL):
Luke Melas-Kyriazi, George Han, and Celine Liang. 2019. Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 124–131, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Generation-Distillation for Efficient Natural Language Understanding in Low-Data Settings (Melas-Kyriazi et al., 2019)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/D19-6114.pdf
Data
DBpedia