Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models

Junmo Kang, Wei Xu, Alan Ritter


Abstract
Fine-tuning large models is highly effective; however, inference can be expensive and produce carbon emissions. Knowledge distillation has been shown to be a practical solution to reduce inference costs, but the distillation process itself requires significant computational resources. Rather than buying or renting GPUs to fine-tune and then distill a large model, an NLP practitioner might instead choose to allocate the available budget to hire annotators and manually label additional fine-tuning data. In this paper, we investigate how to most efficiently use a fixed budget to build a compact model. Through extensive experiments on six diverse tasks, we show that distilling from T5-XXL (11B) to T5-Small (60M) is almost always a cost-efficient strategy compared to annotating more data to directly train a compact model (T5-Small). We further investigate how the optimal budget allocated towards computation varies across scenarios. We will make our code, datasets, annotation cost estimates, and baseline models available as a benchmark to support further work on cost-efficient training of compact models.
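As a rough illustration of the distillation route the abstract compares against annotation, the sketch below shows sequence-level knowledge distillation with Hugging Face Transformers: a large T5 teacher pseudo-labels unlabeled task inputs, and a T5-Small student is fine-tuned on those pairs. This is a minimal sketch under stated assumptions, not the authors' exact pipeline; the checkpoint names, toy input, and learning rate are illustrative, and in practice the teacher would first be fine-tuned on the labeled data.

# Minimal sequence-level distillation sketch (illustrative assumptions only).
# Loading an 11B teacher requires substantial memory; in the paper's setting
# the teacher is a fine-tuned T5-XXL, which "t5-11b" here only stands in for.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
teacher = T5ForConditionalGeneration.from_pretrained("t5-11b").eval()
student = T5ForConditionalGeneration.from_pretrained("t5-small")

unlabeled_inputs = ["summarize: The quick brown fox jumps over the lazy dog."]

# 1) The teacher generates pseudo-labels for unlabeled task inputs.
enc = tok(unlabeled_inputs, return_tensors="pt", padding=True)
with torch.no_grad():
    pseudo_ids = teacher.generate(**enc, max_new_tokens=32)
pseudo_labels = tok.batch_decode(pseudo_ids, skip_special_tokens=True)

# 2) The student is fine-tuned on (input, pseudo-label) pairs with the
#    standard seq2seq cross-entropy loss.
labels = tok(pseudo_labels, return_tensors="pt", padding=True).input_ids
labels[labels == tok.pad_token_id] = -100  # ignore padding in the loss
optimizer = torch.optim.AdamW(student.parameters(), lr=3e-4)
loss = student(**enc, labels=labels).loss
loss.backward()
optimizer.step()

The alternative strategy studied in the paper skips the teacher entirely and spends the same budget on annotating more (input, label) pairs to train T5-Small directly; the training loop for that baseline is identical to step 2 above, just with human labels in place of pseudo-labels.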
Anthology ID: 2023.acl-long.622
Volume: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 11100–11119
URL: https://aclanthology.org/2023.acl-long.622
DOI: 10.18653/v1/2023.acl-long.622
Cite (ACL): Junmo Kang, Wei Xu, and Alan Ritter. 2023. Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11100–11119, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Distill or Annotate? Cost-Efficient Fine-Tuning of Compact Models (Kang et al., ACL 2023)
PDF: https://preview.aclanthology.org/nschneid-patch-5/2023.acl-long.622.pdf
Video: https://preview.aclanthology.org/nschneid-patch-5/2023.acl-long.622.mp4