Low-Rank Interconnected Adaptation across Layers

Yibo Zhong, Jinman Zhao, Yao Zhou


Abstract
Low-rank adaptation (LoRA) is a widely used parameter-efficient fine-tuning (PEFT) method that learns weight updates ΔW = AB for pretrained weights W through low-rank adapters A and B. While LoRA ensures hardware efficiency, its low-rank weight updates limit adaptation performance. In this paper, we propose low-rank interconnected adaptation across layers (Lily), a novel PEFT method that introduces an interconnected framework with locally shared A and globally shared B experts. This structure eliminates redundant per-layer AB pairs, enabling higher-rank ΔW with equal or fewer parameters. To enhance expressiveness, we use data-dependent routers to determine A-B interconnections, preventing B experts from converging to the same behavior and improving representational power across domains. Experiments across modalities, architectures, and model sizes demonstrate Lily’s superior performance and efficiency.
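
The abstract's architecture description can be illustrated with a small sketch. The PyTorch snippet below is a minimal, hypothetical rendering of the interconnection idea (a shared low-rank down-projection A, a pool of shared up-projection B experts, and a data-dependent router that mixes the experts); the module names, shapes, and routing details are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of the Lily idea described in the abstract:
# a shared low-rank down-projection A, a pool of shared up-projection
# experts B, and a data-dependent router that mixes the B experts.
# Names, shapes, and routing details are illustrative assumptions.
import torch
import torch.nn as nn


class LilyAdapter(nn.Module):
    def __init__(self, d_in, d_out, rank=8, num_b_experts=4, scaling=1.0):
        super().__init__()
        # Locally shared down-projection A (shared by a group of layers in the
        # paper; a single instance here for brevity).
        self.A = nn.Parameter(torch.randn(d_in, rank) * 0.02)
        # Globally shared up-projection experts B.
        self.B = nn.Parameter(torch.zeros(num_b_experts, rank, d_out))
        # Data-dependent router over the B experts.
        self.router = nn.Linear(d_in, num_b_experts)
        self.scaling = scaling

    def forward(self, x, base_out):
        # x: (batch, d_in); base_out: output of the frozen pretrained layer, (batch, d_out)
        weights = torch.softmax(self.router(x), dim=-1)          # (batch, E)
        h = x @ self.A                                           # (batch, r)
        expert_out = torch.einsum("br,erd->bed", h, self.B)      # (batch, E, d_out)
        delta = (weights.unsqueeze(-1) * expert_out).sum(dim=1)  # (batch, d_out)
        return base_out + self.scaling * delta


# Usage: wrap a frozen pretrained linear layer with the adapter.
base = nn.Linear(768, 768)
base.requires_grad_(False)
adapter = LilyAdapter(d_in=768, d_out=768)
x = torch.randn(2, 768)
y = adapter(x, base(x))  # adapted output, shape (2, 768)
```

Because the B experts and the router are shared rather than duplicated per layer, the per-sample mixture yields an effective ΔW whose rank can exceed that of a single per-layer AB pair at a comparable parameter budget, which is the trade-off the abstract highlights.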
Anthology ID:
2025.findings-acl.874
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17005–17029
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.874/
Cite (ACL):
Yibo Zhong, Jinman Zhao, and Yao Zhou. 2025. Low-Rank Interconnected Adaptation across Layers. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17005–17029, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Low-Rank Interconnected Adaptation across Layers (Zhong et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.874.pdf