Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity

Yuji Byun, Jaeho Lee


Abstract
Low-rank adaptation (LoRA) offers an efficient alternative to full-weight adaptation in federated fine-tuning of language models, significantly reducing computational costs. By adjusting ranks for each client, federated LoRA enables flexible resource allocation. However, we observe that heterogeneous ranks among clients lead to unstable performance. Our analysis attributes this instability to the conventional zero-padding aggregation strategy, which dilutes information from high-rank clients during model aggregation. To address this issue, we propose a replication-based padding strategy that better retains valuable information from clients with high-quality data. Empirically, this approach accelerates convergence and enhances the global model’s predictive performance.
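The abstract does not give implementation details, but the two aggregation strategies it contrasts can be illustrated with a minimal sketch. Assume each client i holds LoRA factors B_i (d × r_i) and A_i (r_i × k) with heterogeneous ranks r_i. The conventional approach zero-pads both factors to the maximum rank before averaging; a replication-based alternative fills the extra rank slots by repeating the client's own components, rescaled so the padded product still equals B_i A_i. The function names, the cyclic replication rule, and the rescaling below are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def zero_pad(B, A, r_max):
    """Conventional padding: fill the missing rank slots with zeros.
    B: (d, r), A: (r, k) -> (d, r_max), (r_max, k)."""
    d, r = B.shape
    k = A.shape[1]
    B_pad = np.zeros((d, r_max)); B_pad[:, :r] = B
    A_pad = np.zeros((r_max, k)); A_pad[:r, :] = A
    return B_pad, A_pad

def replicate_pad(B, A, r_max):
    """Illustrative replication-based padding (an assumption, not necessarily
    the paper's scheme): cycle through existing rank-1 components to fill
    r_max slots, rescaling so that B_pad @ A_pad == B @ A."""
    r = B.shape[1]
    idx = np.arange(r_max) % r               # which component fills each slot
    counts = np.bincount(idx, minlength=r)   # how often each component repeats
    B_pad = B[:, idx]
    A_pad = A[idx, :] / counts[idx][:, None] # rescale to preserve the product
    return B_pad, A_pad

def aggregate(clients, pad_fn, r_max):
    """Server-side averaging of padded factors from heterogeneous-rank clients."""
    B_sum, A_sum = 0.0, 0.0
    for B, A in clients:
        B_pad, A_pad = pad_fn(B, A, r_max)
        B_sum = B_sum + B_pad
        A_sum = A_sum + A_pad
    n = len(clients)
    return B_sum / n, A_sum / n
```

In a toy run with one rank-2 and one rank-4 client padded to rank 4, zero-padding leaves the low-rank client's extra slots at zero, so the high-rank client's components in those slots are averaged toward zero (the dilution the abstract describes); replication fills those slots with the low-rank client's own components instead.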
Anthology ID: 2025.naacl-short.30
Volume: Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 356–362
URL: https://preview.aclanthology.org/fix-sig-urls/2025.naacl-short.30/
Cite (ACL): Yuji Byun and Jaeho Lee. 2025. Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 356–362, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Towards Federated Low-Rank Adaptation of Language Models with Rank Heterogeneity (Byun & Lee, NAACL 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.naacl-short.30.pdf