Abstract
In recent years, large pre-trained models have demonstrated state-of-the-art performance on many NLP tasks. However, deploying these models on devices with limited resources is challenging due to their large computational and memory requirements. Moreover, the need for a considerable amount of labeled training data also hinders real-world deployment. Model distillation has shown promising results for reducing model size, computational load, and the amount of training data required. In this paper we test the boundaries of BERT model distillation in terms of model compression, inference efficiency, and data scarcity. We show that classification tasks that require capturing general lexical semantics can be successfully distilled into very simple and efficient models using relatively small amounts of labeled training data. We also show that distillation of large pre-trained models is more effective in real-life scenarios where limited amounts of labeled training data are available.
- Anthology ID:
- 2020.sustainlp-1.5
- Volume:
- Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Venue:
- sustainlp
- Publisher:
- Association for Computational Linguistics
- Pages:
- 35–40
- URL:
- https://aclanthology.org/2020.sustainlp-1.5
- DOI:
- 10.18653/v1/2020.sustainlp-1.5
- Cite (ACL):
- Moshe Wasserblat, Oren Pereg, and Peter Izsak. 2020. Exploring the Boundaries of Low-Resource BERT Distillation. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 35–40, Online. Association for Computational Linguistics.
- Cite (Informal):
- Exploring the Boundaries of Low-Resource BERT Distillation (Wasserblat et al., sustainlp 2020)
- PDF:
- https://preview.aclanthology.org/paclic-22-ingestion/2020.sustainlp-1.5.pdf
- Data
- AG News, CARER, CoLA, GLUE, IMDb Movie Reviews, SST
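The distillation approach the abstract refers to is typically trained with a soft-target objective: the student matches the teacher's temperature-softened output distribution in addition to the hard labels. Below is a minimal stdlib-only sketch of that standard loss (in the style of Hinton et al.); the paper does not specify its exact loss or hyperparameters here, so the `temperature` and `alpha` values are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields a softer
    distribution that exposes more of the teacher's 'dark knowledge'."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=2.0, alpha=0.5):
    """Standard soft-target distillation loss (illustrative, not the
    paper's exact formulation):
      alpha * cross-entropy(student, hard label)
      + (1 - alpha) * T^2 * KL(teacher_soft || student_soft)."""
    # Hard-label term: ordinary cross-entropy at temperature 1.
    p_student = softmax(student_logits)
    hard_loss = -math.log(p_student[true_label])

    # Soft-label term: KL divergence between softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    p_teacher_soft = softmax(teacher_logits, temperature)
    p_student_soft = softmax(student_logits, temperature)
    soft_loss = sum(t * math.log(t / s)
                    for t, s in zip(p_teacher_soft, p_student_soft))

    return alpha * hard_loss + (1 - alpha) * temperature ** 2 * soft_loss
```

In a low-resource setting like the one studied in the paper, the soft term lets the small student learn from unlabeled examples scored by the teacher, which is why distillation can reduce the amount of labeled data needed.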