Abstract
Recent advances in distilling pretrained language models have discovered that, besides the expressiveness of knowledge, student-friendliness should also be taken into account to realize a truly knowledgeable teacher. Based on a pilot study, we find that over-parameterized teachers can produce expressive yet student-unfriendly knowledge and are thus limited in overall knowledgeableness. To remove the parameters that cause student-unfriendliness, we propose a sparse teacher trick guided by an overall knowledgeable score for each teacher parameter. The knowledgeable score is essentially an interpolation of the expressiveness and student-friendliness scores, aiming to ensure that expressive parameters are retained while student-unfriendly ones are removed. Extensive experiments on the GLUE benchmark show that the proposed sparse teachers can be dense with knowledge and lead to students with compelling performance compared with a series of competitive baselines.
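The abstract only states that the knowledgeable score interpolates the expressiveness and student-friendliness scores and that low-scoring parameters are removed; the sketch below is a rough PyTorch illustration of that idea under stated assumptions, not the paper's actual procedure. All names (`expressive_score`, `friendly_score`, `lam`, `sparsity`) and the simple threshold-based pruning are hypothetical choices made for illustration.

```python
import torch

def knowledgeable_mask(expressive_score: torch.Tensor,
                       friendly_score: torch.Tensor,
                       lam: float = 0.5,
                       sparsity: float = 0.3) -> torch.Tensor:
    """Hypothetical sketch: interpolate two per-parameter scores and keep
    the top-(1 - sparsity) fraction, pruning the rest.

    The argument names and the thresholding scheme are illustrative
    assumptions, not the paper's actual interface or algorithm.
    """
    # Overall knowledgeable score as an interpolation of the two criteria.
    score = lam * expressive_score + (1.0 - lam) * friendly_score

    # Prune the lowest-scoring `sparsity` fraction of parameters.
    k = int(sparsity * score.numel())
    if k == 0:
        return torch.ones_like(score, dtype=torch.bool)
    threshold = score.flatten().kthvalue(k).values
    return score > threshold


# Toy usage with random scores for a 768x768 weight matrix.
exp_s = torch.rand(768, 768)
stu_s = torch.rand(768, 768)
mask = knowledgeable_mask(exp_s, stu_s, lam=0.5, sparsity=0.3)
print(mask.float().mean())  # roughly 0.7 of parameters retained
```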
- Anthology ID:
- 2022.emnlp-main.258
- Volume:
- Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates
- Editors:
- Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3904–3915
- URL:
- https://aclanthology.org/2022.emnlp-main.258
- DOI:
- 10.18653/v1/2022.emnlp-main.258
- Cite (ACL):
- Yi Yang, Chen Zhang, and Dawei Song. 2022. Sparse Teachers Can Be Dense with Knowledge. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3904–3915, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal):
- Sparse Teachers Can Be Dense with Knowledge (Yang et al., EMNLP 2022)
- PDF:
- https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.258.pdf