A Study of Parameter Efficient Fine-tuning by Learning to Efficiently Fine-Tune

Taha Ceritli, Savas Ozkan, Jeongwon Min, Eunchung Noh, Cho Jung Min, Mete Ozay


Abstract
The growing size of large language models (LLMs) calls for parameter-efficient fine-tuning (PEFT) methods to adapt them to new tasks. Existing methods, such as Low-Rank Adaptation (LoRA), adapt a model by training a small set of PEFT parameters. An open problem that must be solved to employ these methods effectively is the identification of PEFT parameters. More precisely, related work identifies PEFT parameters either by projecting the high-dimensional parameters of LLMs onto low-dimensional parameter manifolds using predefined projections, or by treating the projections themselves as the PEFT parameters. To study this problem, we propose a new approach called Learning to Efficiently Fine-tune (LEFT), in which we aim to learn spaces of PEFT parameters from data. To learn how to generate PEFT parameters on a learned parameter space while fine-tuning the LLMs, we propose the Parameter Generation (PG) method. In our experimental analyses, we examine the effectiveness of the proposed solutions, exploring the accuracy of fine-tuned LLMs and the characteristics of PEFT parameters on benchmark GLUE tasks.
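To make the setup concrete, below is a minimal PyTorch sketch of the two ideas the abstract contrasts: a standard LoRA-style adapter, where the low-rank factors are trained directly, and a hypothetical parameter-generation module in the spirit of PG, where a small generator maps a learned latent code to the low-rank factors so that the PEFT parameters live on a learned low-dimensional space. The class names, rank, scaling, and latent size are illustrative assumptions; the paper's actual PG architecture is not specified in this abstract.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank (LoRA-style) update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        # Only these rank * (d_in + d_out) parameters are trained.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at step 0
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


class GeneratedLoRALinear(nn.Module):
    """Hypothetical illustration of parameter generation (not the paper's
    implementation): instead of training A and B directly, a small generator
    maps a learned latent code to the low-rank factors, so the PEFT
    parameters are produced from a learned low-dimensional space."""

    def __init__(self, base: nn.Linear, rank: int = 8, latent_dim: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.rank, self.d_in, self.d_out = rank, d_in, d_out
        self.z = nn.Parameter(torch.randn(latent_dim))             # learned latent code
        self.generator = nn.Linear(latent_dim, rank * (d_in + d_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        flat = self.generator(self.z)
        A = flat[: self.rank * self.d_in].view(self.rank, self.d_in)
        B = flat[self.rank * self.d_in :].view(self.d_out, self.rank)
        return self.base(x) + x @ A.T @ B.T


# Usage: wrap a pretrained projection; only the adapter (or generator and
# latent code) parameters receive gradients during fine-tuning.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(2, 768))               # shape: (batch, 768)
```

In both sketches the pretrained weights are frozen and only a small number of new parameters are optimized, which is what makes the fine-tuning parameter-efficient.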
Anthology ID:
2024.findings-emnlp.929
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
15819–15836
URL:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.929/
DOI:
10.18653/v1/2024.findings-emnlp.929
Cite (ACL):
Taha Ceritli, Savas Ozkan, Jeongwon Min, Eunchung Noh, Cho Jung Min, and Mete Ozay. 2024. A Study of Parameter Efficient Fine-tuning by Learning to Efficiently Fine-Tune. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 15819–15836, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
A Study of Parameter Efficient Fine-tuning by Learning to Efficiently Fine-Tune (Ceritli et al., Findings 2024)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.929.pdf