Continual Gradient Low-Rank Projection Fine-Tuning for LLMs

Chenxu Wang, Yilin Lyu, Zicheng Sun, Liping Jing


Abstract
Continual fine-tuning of Large Language Models (LLMs) is hampered by the trade-off between efficiency and expressiveness. Low-Rank Adaptation (LoRA) offers efficiency but constrains the model’s ability to learn new tasks and transfer knowledge due to its low-rank nature and reliance on explicit parameter constraints. We propose GORP (Gradient LOw Rank Projection) for Continual Learning, a novel training strategy that overcomes these limitations by synergistically combining full and low-rank parameters and jointly updating them within a unified low-rank gradient subspace. GORP expands the optimization space while preserving efficiency and mitigating catastrophic forgetting. Extensive experiments on continual learning benchmarks demonstrate GORP’s superior performance compared to existing state-of-the-art approaches. Code is available at https://github.com/Wcxwcxw/GORP.
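To make the abstract's "unified low-rank gradient subspace" concrete, below is a minimal PyTorch sketch of gradient low-rank projection. It is not the authors' GORP implementation; the class name, the hyperparameters (rank, update_gap), and the projection recipe (SVD of the current gradient, periodically refreshed) are illustrative assumptions only.

```python
# Hypothetical sketch: project a 2-D gradient into a rank-r subspace,
# keep optimizer work at rank r, then project the update back.
import torch


class LowRankGradProjector:
    """Illustrative gradient low-rank projection (not the GORP code)."""

    def __init__(self, rank: int = 4, update_gap: int = 200):
        self.rank = rank
        self.update_gap = update_gap  # how often to refresh the subspace
        self.step = 0
        self.P = None  # column-orthonormal basis of the gradient subspace

    def project(self, grad: torch.Tensor) -> torch.Tensor:
        # Periodically refresh the subspace from the SVD of the current gradient.
        if self.P is None or self.step % self.update_gap == 0:
            U, _, _ = torch.linalg.svd(grad, full_matrices=False)
            self.P = U[:, : self.rank]  # (m, r)
        self.step += 1
        return self.P.T @ grad  # compressed gradient of shape (r, n)

    def project_back(self, low_rank_grad: torch.Tensor) -> torch.Tensor:
        return self.P @ low_rank_grad  # restore full shape (m, n)


# Toy usage: one SGD-style step on a random weight matrix.
torch.manual_seed(0)
W = torch.randn(64, 32, requires_grad=True)
loss = (W @ torch.randn(32, 8)).pow(2).mean()
loss.backward()

proj = LowRankGradProjector(rank=4)
g_low = proj.project(W.grad)                  # optimizer state would live at rank r
W.data -= 1e-2 * proj.project_back(g_low)     # apply the reconstructed update
```

In this sketch, full-rank weights are updated while the gradient (and, in a real optimizer, its moments) is compressed to rank r, which captures the general idea of optimizing in a low-rank gradient subspace rather than constraining the parameters themselves.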
Anthology ID:
2025.acl-long.721
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
14815–14829
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.721/
Cite (ACL):
Chenxu Wang, Yilin Lyu, Zicheng Sun, and Liping Jing. 2025. Continual Gradient Low-Rank Projection Fine-Tuning for LLMs. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14815–14829, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Continual Gradient Low-Rank Projection Fine-Tuning for LLMs (Wang et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.721.pdf