@inproceedings{wang-etal-2025-continual,
    title = "Continual Gradient Low-Rank Projection Fine-Tuning for {LLM}s",
    author = "Wang, Chenxu  and
      Lyu, Yilin  and
      Sun, Zicheng  and
      Jing, Liping",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.acl-long.721/",
    doi = "10.18653/v1/2025.acl-long.721",
    pages = "14815--14829",
    ISBN = "979-8-89176-251-0",
    abstract = "Continual fine-tuning of Large Language Models (LLMs) is hampered by the trade-off between efficiency and expressiveness. Low-Rank Adaptation (LoRA) offers efficiency but constrains the model{'}s ability to learn new tasks and transfer knowledge due to its low-rank nature and reliance on explicit parameter constraints. We propose GORP ($\underline{\textbf{G}}$radient L$\underline{\textbf{O}}$w $\underline{\textbf{R}}$ank $\underline{\textbf{P}}$rojection) for Continual Learning, a novel training strategy that overcomes these limitations by synergistically combining full and low-rank parameters and jointly updating within a unified low-rank gradient subspace. GORP expands the optimization space while preserving efficiency and mitigating catastrophic forgetting. Extensive experiments on continual learning benchmarks demonstrate GORP{'}s superior performance compared to existing state-of-the-art approaches. Code is available at https://github.com/Wcxwcxw/GORP."
}