Performance-Efficiency Trade-Offs in Adapting Language Models to Text Classification Tasks

Laura Aina, Nikos Voskarides, Roi Blanco


Abstract
Pre-trained language models (LMs) obtain state-of-the-art performance when adapted to text classification tasks. However, when using such models in real-world applications, efficiency considerations are paramount. In this paper, we study how different training procedures that adapt LMs to text classification perform as we vary model and train set size. More specifically, we compare standard fine-tuning, prompting, and knowledge distillation (KD) when the teacher is trained with either fine-tuning or prompting. Our findings suggest that, even though fine-tuning and prompting work well for training large LMs on large train sets, there are more efficient alternatives that can reduce compute or data cost. Interestingly, we find that prompting combined with KD can reduce compute and data cost at the same time.
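The abstract compares standard fine-tuning, prompting, and knowledge distillation (KD), where a smaller student is trained to imitate a fine-tuned or prompted teacher. As a rough, generic illustration of a KD objective for text classification, the sketch below blends a softened-teacher KL term with cross-entropy on gold labels; it is a minimal sketch assuming PyTorch, and the temperature and loss weighting are illustrative assumptions, not the authors' configuration.

    # Minimal sketch of a knowledge-distillation (KD) loss for text classification.
    # Generic illustration only: temperature, weighting, and function names are
    # assumptions for exposition, not the setup used in the paper.
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soften the teacher's class distribution and have the student match it,
        # while also fitting the gold labels with standard cross-entropy.
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        # The T^2 factor keeps the KD gradient magnitude comparable across temperatures.
        kd_term = F.kl_div(log_student, soft_targets,
                           reduction="batchmean") * temperature ** 2
        ce_term = F.cross_entropy(student_logits, labels)
        return alpha * kd_term + (1.0 - alpha) * ce_term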
Anthology ID:
2022.aacl-short.31
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2022
Address:
Online only
Editors:
Yulan He, Heng Ji, Sujian Li, Yang Liu, Chia-Hui Chang
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
244–253
URL:
https://aclanthology.org/2022.aacl-short.31
Cite (ACL):
Laura Aina, Nikos Voskarides, and Roi Blanco. 2022. Performance-Efficiency Trade-Offs in Adapting Language Models to Text Classification Tasks. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 244–253, Online only. Association for Computational Linguistics.
Cite (Informal):
Performance-Efficiency Trade-Offs in Adapting Language Models to Text Classification Tasks (Aina et al., AACL-IJCNLP 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.aacl-short.31.pdf