Abstract
Large Language Models (LLMs) have emerged as the dominant paradigm in Natural Language Processing owing to their remarkable performance across a wide range of target tasks. However, naively fine-tuning them for specific downstream tasks often requires updating a vast number of parameters, resulting in high computational costs and overfitting when training data is limited. In this paper, we propose a novel approach, called *Stochastic Tuning*, that addresses these challenges by selectively updating a small subset of parameters in each step of the tuning process. Our approach is characterized by its customization of updates based on task-specific partial gradients with respect to stochastic sub-networks. The advantage of Stochastic Tuning over existing solutions lies in its ability to consider both parameter weights and forward values, which guarantees context-sensitive fine-tuning. Our experiments demonstrate that Stochastic Tuning outperforms existing lightweight fine-tuning methods, improving average performance by over two points on RoBERTa across several tasks in the GLUE benchmark while updating merely **0.08%** of the model’s parameters. The code for our implementation can be found at https://github.com/m-Tajari/StocTuning_LLMs.
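The abstract's core mechanism, confining each update to a stochastically sampled sub-network via masked gradients, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch rendering of that general idea, not the authors' released implementation: the function name, the uniform sampling, and the `keep_ratio` parameter are all assumptions made for illustration (see the repository linked above for the actual code).

```python
import torch

def masked_gradient_step(model, loss, keep_ratio=0.0008, lr=1e-3):
    """Illustrative masked-gradient update (a sketch, not the paper's algorithm).

    After backpropagation, all but a small stochastic subset of each
    parameter's gradient entries are zeroed out, so only roughly
    `keep_ratio` of the model's weights change this step, mirroring
    the ~0.08% update budget quoted in the abstract.
    """
    loss.backward()
    with torch.no_grad():
        for param in model.parameters():
            if param.grad is None:
                continue
            # Stochastic sub-network: a Bernoulli mask over gradient entries.
            # The paper scores updates using both weights and forward values;
            # here a plain uniform draw stands in for that criterion.
            mask = (torch.rand_like(param) < keep_ratio).to(param.dtype)
            param -= lr * param.grad * mask
            param.grad = None  # clear the gradient for the next step
```

In practice the mask would be resampled every step and the update wrapped in a proper optimizer; the point here is only that masking gradients restricts each update to a tiny, randomly chosen sub-network.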
- Anthology ID: 2024.findings-emnlp.1002
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 17195–17202
- URL: https://aclanthology.org/2024.findings-emnlp.1002
- DOI: 10.18653/v1/2024.findings-emnlp.1002
- Cite (ACL): Mohammad Akbar-Tajari and Mohammad Taher Pilehvar. 2024. Stochastic Fine-Tuning of Language Models Using Masked Gradients. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 17195–17202, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): Stochastic Fine-Tuning of Language Models Using Masked Gradients (Akbar-Tajari & Pilehvar, Findings 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.findings-emnlp.1002.pdf