Does Fine-tuning a Classifier Help in Low-budget Scenarios? Not Much

Cesar Gonzalez-Gutierrez, Audi Primadhanty, Francesco Cazzaro, Ariadna Quattoni


Abstract
In recent years, the two-step approach to text classification based on pre-training plus fine-tuning has led to significant improvements in classification performance. In this paper, we study the low-budget scenario and ask whether it is justified to allocate the additional resources needed to fine-tune complex models. To do so, we isolate the gains obtained from pre-training from those obtained from fine-tuning. We find that, once the gains from pre-training are factored out, complex transformer models yield only marginal improvements over simpler models. In this scenario, therefore, using simpler classifiers on top of pre-trained representations proves to be a viable alternative.
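The abstract's conclusion, that a simple classifier trained on frozen pre-trained representations can be competitive in low-budget settings, can be illustrated with a minimal sketch. This is not the paper's experimental setup: the encoder choice (all-MiniLM-L6-v2 via sentence-transformers), the toy data, and the logistic-regression head are illustrative assumptions.

# Minimal sketch: a simple classifier on top of frozen pre-trained representations.
# Encoder, data, and classifier below are illustrative assumptions, not the paper's setup.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Small labeled budget: a handful of examples per class.
train_texts = ["great movie, loved it", "terrible plot and acting",
               "a delightful surprise", "waste of two hours"]
train_labels = [1, 0, 1, 0]
test_texts = ["one of the best films this year", "I want my money back"]

# Pre-trained encoder used only as a frozen feature extractor (no fine-tuning).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X_train = encoder.encode(train_texts)
X_test = encoder.encode(test_texts)

# Simple linear classifier trained on the fixed embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
print(clf.predict(test_texts := X_test))  # e.g. [1 0]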
Anthology ID: 2024.insights-1.3
Volume: Proceedings of the Fifth Workshop on Insights from Negative Results in NLP
Month: June
Year: 2024
Address: Mexico City, Mexico
Editors: Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, Anna Rumshisky
Venues: insights | WS
Publisher: Association for Computational Linguistics
Pages: 17–24
URL: https://aclanthology.org/2024.insights-1.3
Cite (ACL): Cesar Gonzalez-Gutierrez, Audi Primadhanty, Francesco Cazzaro, and Ariadna Quattoni. 2024. Does Fine-tuning a Classifier Help in Low-budget Scenarios? Not Much. In Proceedings of the Fifth Workshop on Insights from Negative Results in NLP, pages 17–24, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal): Does Fine-tuning a Classifier Help in Low-budget Scenarios? Not Much (Gonzalez-Gutierrez et al., insights-WS 2024)
PDF: https://preview.aclanthology.org/ingestion-checklist/2024.insights-1.3.pdf