Adaptive LoRA Merge with Parameter Pruning for Low-Resource Generation

Ryota Miyano, Yuki Arase

Abstract
This study proposes a simple yet effective LoRA merge method for adapting LLMs to low-resource language generation tasks. The LoRA merge technique, which integrates multiple LoRA modules trained on different tasks, has gained attention as an effective and efficient approach for adapting LLMs to target tasks. However, previous methods are limited in adaptability because they keep the LoRA parameters frozen, and the low-resource setting has been outside their scope. We propose a LoRA merge method that updates and prunes LoRA parameters through fine-tuning with minimal target-task data, which allows finer-grained adjustment of LoRA parameters and enhances task adaptability. We conducted extensive experiments with summarization as the benchmark task, on datasets covering various domains in two languages, English and Japanese. The results confirm that the proposed method achieves significant and consistent improvements in task adaptability over previous methods.
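The abstract does not give implementation details, but the core idea (combine several source-task LoRA modules, keep the LoRA parameters themselves trainable, and prune them while fine-tuning on a small target set) can be sketched as follows. This is a minimal illustrative PyTorch sketch, not the authors' code: the class name, the per-module learnable merge weights, and the magnitude-based pruning criterion are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class AdaptiveLoRAMerge(nn.Module):
    """Illustrative sketch: a frozen linear layer wrapped with several
    trainable source-task LoRA modules combined by learnable merge weights."""

    def __init__(self, base_linear, lora_pairs, prune_ratio=0.5):
        # lora_pairs: list of (A, B) tensors from source-task LoRA modules,
        # with A of shape (r, in_features) and B of shape (out_features, r).
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad_(False)  # the pretrained backbone stays frozen
        # Unlike weight-only merging, the LoRA matrices remain trainable.
        self.As = nn.ParameterList([nn.Parameter(a.clone()) for a, _ in lora_pairs])
        self.Bs = nn.ParameterList([nn.Parameter(b.clone()) for _, b in lora_pairs])
        # One learnable merge coefficient per source-task module (an assumption).
        self.merge_w = nn.Parameter(torch.full((len(lora_pairs),), 1.0 / len(lora_pairs)))
        self.prune_ratio = prune_ratio
        self.masks = None

    @torch.no_grad()
    def prune(self):
        # Zero out the smallest-magnitude entries of each low-rank update.
        # Magnitude pruning is an assumed criterion, not taken from the paper.
        self.masks = []
        for a, b in zip(self.As, self.Bs):
            delta = b @ a                                # (out_features, in_features)
            k = max(1, int(delta.numel() * self.prune_ratio))
            thresh = delta.abs().flatten().kthvalue(k).values
            self.masks.append((delta.abs() > thresh).float())

    def forward(self, x):
        out = self.base(x)
        for i, (a, b) in enumerate(zip(self.As, self.Bs)):
            delta = b @ a
            if self.masks is not None:
                delta = delta * self.masks[i]            # apply pruning mask
            out = out + self.merge_w[i] * (x @ delta.T)
        return out
```

Under this sketch, fine-tuning on the small target-task set updates the LoRA matrices and merge weights jointly, with prune() applied once or periodically during training. This mirrors the abstract's description of updating and pruning the merged LoRA parameters, though the actual pruning criterion and schedule are not specified there.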
Anthology ID:
2025.findings-acl.990
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
19353–19366
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.990/
Cite (ACL):
Ryota Miyano and Yuki Arase. 2025. Adaptive LoRA Merge with Parameter Pruning for Low-Resource Generation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 19353–19366, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Adaptive LoRA Merge with Parameter Pruning for Low-Resource Generation (Miyano & Arase, Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.990.pdf