Abstract
Fine-tuning all parameters of large language models (LLMs) requires substantial computational resources and time. Recent advances in parameter-efficient fine-tuning (PEFT) techniques, such as Adapter tuning and LoRA, allow only a small fraction of these LLMs' parameters to be adjusted. Meanwhile, it has been observed that over-smoothing diminishes the effectiveness of Transformer-based LLMs, resulting in suboptimal performance on downstream tasks. In this paper, we present SIBO, a SImple BOoster that enhances PEFT by injecting an initial residual. SIBO is straightforward and readily extensible to a range of state-of-the-art PEFT techniques, alleviating over-smoothing and enhancing performance. Extensive experiments on 22 benchmark datasets demonstrate that SIBO significantly improves various strong baselines, achieving up to 15.7% and 23.5% improvement over existing PEFT methods on arithmetic and commonsense reasoning tasks, respectively.
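The abstract names the mechanism only at a high level (injecting an initial residual into PEFT modules). Below is a minimal, illustrative sketch in PyTorch of one way such an injection could be wired into a LoRA update, assuming the initial token embeddings are blended into the module input with a hypothetical mixing weight `lam`; the class and parameter names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SIBOLoRASketch(nn.Module):
    """Illustrative sketch (not the reference implementation): a LoRA update
    whose input is blended with the initial token embeddings, i.e. an injected
    initial residual in the spirit of SIBO. `lam` is a hypothetical mixing weight."""

    def __init__(self, d_model: int, rank: int = 8, lam: float = 0.2, alpha: float = 16.0):
        super().__init__()
        self.lam = lam                       # weight of the initial residual (assumed)
        self.scaling = alpha / rank          # usual LoRA scaling factor
        self.lora_A = nn.Linear(d_model, rank, bias=False)
        self.lora_B = nn.Linear(rank, d_model, bias=False)
        nn.init.zeros_(self.lora_B.weight)   # standard LoRA init: update starts at zero

    def forward(self, hidden: torch.Tensor, initial: torch.Tensor) -> torch.Tensor:
        # Blend the current hidden states with the layer-0 token embeddings,
        # then compute the low-rank update; the caller adds this to the output
        # of the frozen pretrained projection, as in standard LoRA.
        mixed = (1.0 - self.lam) * hidden + self.lam * initial
        return self.scaling * self.lora_B(self.lora_A(mixed))
```

In this sketch only `lora_A` and `lora_B` are trained; blending in the first-layer embeddings keeps layer-0 information in every layer's update, which matches the paper's intuition for alleviating over-smoothing.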
- Anthology ID:
- 2024.findings-acl.72
- Volume:
- Findings of the Association for Computational Linguistics ACL 2024
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand and virtual meeting
- Editors:
- Lun-Wei Ku, Andre Martins, Vivek Srikumar
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1241–1257
- URL:
- https://aclanthology.org/2024.findings-acl.72
- Cite (ACL):
- Zhihao Wen, Jie Zhang, and Yuan Fang. 2024. SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning. In Findings of the Association for Computational Linguistics ACL 2024, pages 1241–1257, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
- Cite (Informal):
- SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning (Wen et al., Findings 2024)
- PDF:
- https://aclanthology.org/2024.findings-acl.72.pdf