Enhancing Parameter-efficient Fine-tuning with Simple Calibration Based on Stable Rank
Peiyu Liu, Ze-Feng Gao, Xiao Zhang, Wayne Xin Zhao, Ji-Rong Wen
Abstract
Lightweight fine-tuning is widely used as an important technique for efficiently adapting pre-trained language models (PLMs) to downstream tasks. Despite the reduction in trainable parameters, existing lightweight fine-tuning methods are effective in low-resource settings but often fail in high-resource settings, leading to unreliable outcomes. This limitation can be attributed to an inflexible strategy: the parameters to be trained are fixed before fine-tuning and never revised, which ignores both the inherent variance in generalization ability across model components (e.g., feed-forward and attention layers) and the changes that occur during the fine-tuning process. In this paper, we introduce a simple but effective calibration for lightweight fine-tuning of PLMs based on the stable rank of parameter matrices, applied with respect to both model components and the training process. We provide theoretical analysis and experimental verification of the proposed calibration strategy. For efficiency, we further propose time-aware and structure-aware strategies that determine the most suitable point to begin the fine-tuning procedure and selectively choose the parameter matrices for lightweight fine-tuning, respectively. Extensive experiments demonstrate the superiority of the proposed fine-tuning approach (an average improvement of 3.1 GLUE points over the lightweight fine-tuning baseline).
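For context, the stable rank of a weight matrix W is the standard quantity srank(W) = ‖W‖_F² / ‖W‖_2², the squared Frobenius norm divided by the squared spectral norm. The sketch below is a rough illustration (not the authors' released implementation) of how this score could be computed in PyTorch and used for a structure-aware choice of parameter matrices; the helper names `stable_rank` and `select_matrices_by_stable_rank`, the top-k rule, and the preference for higher scores are all assumptions made for illustration.

```python
import torch


def stable_rank(weight: torch.Tensor) -> float:
    """Stable rank srank(W) = ||W||_F^2 / ||W||_2^2, i.e. the squared
    Frobenius norm divided by the squared spectral norm."""
    w = weight.detach().float()
    fro_sq = torch.linalg.matrix_norm(w, ord="fro") ** 2
    spec = torch.linalg.matrix_norm(w, ord=2)  # largest singular value
    return (fro_sq / spec**2).item()


def select_matrices_by_stable_rank(model: torch.nn.Module, k: int = 4):
    """Hypothetical structure-aware selection: score every 2-D parameter
    matrix by stable rank and return the names of the top-k candidates
    to be made trainable (the actual selection rule in the paper may
    differ)."""
    scored = [
        (name, stable_rank(param))
        for name, param in model.named_parameters()
        if param.ndim == 2
    ]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [name for name, _ in scored[:k]]
```

The paper's full method also calibrates over the course of training (the time-aware strategy); this static sketch only illustrates the structure-side scoring.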
- Anthology ID: 2024.lrec-main.534
- Volume: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
- Month: May
- Year: 2024
- Address: Torino, Italia
- Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
- Venues: LREC | COLING
- Publisher: ELRA and ICCL
- Pages: 6024–6035
- URL: https://aclanthology.org/2024.lrec-main.534
- Cite (ACL): Peiyu Liu, Ze-Feng Gao, Xiao Zhang, Wayne Xin Zhao, and Ji-Rong Wen. 2024. Enhancing Parameter-efficient Fine-tuning with Simple Calibration Based on Stable Rank. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6024–6035, Torino, Italia. ELRA and ICCL.
- Cite (Informal): Enhancing Parameter-efficient Fine-tuning with Simple Calibration Based on Stable Rank (Liu et al., LREC-COLING 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.lrec-main.534.pdf