Decentralized Low-Rank Fine-Tuning of Large Language Models

Sajjad Ghiasvand, Mahnoosh Alizadeh, Ramtin Pedarsani


Abstract
While parameter-efficient fine-tuning (PEFT) techniques like Low-Rank Adaptation (LoRA) offer computationally efficient adaptations of Large Language Models (LLMs), their practical deployment often assumes centralized data and training environments. However, real-world scenarios frequently involve distributed, privacy-sensitive datasets that require decentralized solutions. Federated learning (FL) addresses data privacy by coordinating model updates across clients, but it is typically based on centralized aggregation through a parameter server, which can introduce bottlenecks and communication constraints. Decentralized learning, in contrast, eliminates this dependency by enabling direct collaboration between clients, improving scalability and efficiency in distributed environments. Despite its advantages, decentralized LLM fine-tuning remains underexplored. In this work, we propose Dec-LoRA, an algorithm for decentralized fine-tuning of LLMs based on LoRA. Through extensive experiments on BERT and LLaMA-2 models, we show that Dec-LoRA maintains performance comparable to centralized LoRA across various conditions, including data heterogeneity and quantization constraints. This highlights its potential for scalable LLM fine-tuning in decentralized environments.
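The abstract does not spell out the update rule, but a natural reading is that Dec-LoRA alternates local LoRA gradient steps on each client's private data with peer-to-peer averaging of only the low-rank adapter factors over a communication graph. The following is a minimal sketch under that assumption (hypothetical names and a toy ring topology, not the authors' released code):

```python
import torch
import torch.nn as nn

# Hypothetical sketch of one Dec-LoRA-style round (an assumption based on the
# abstract): each client keeps a frozen base weight plus trainable low-rank
# factors A and B, takes a local step on private data, then averages A and B
# with its graph neighbors -- no parameter server involved.

class LoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)          # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

def local_step(model, x, y, lr=1e-2):
    # One SGD step on the client's private batch; only A and B receive gradients.
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p in (model.A, model.B):
            p -= lr * p.grad
            p.grad = None
    return loss.item()

def gossip_average(models, neighbors):
    # Synchronous gossip: each client mixes its LoRA factors uniformly with its
    # neighbors' factors; only the low-rank adapters are ever communicated.
    new_A = [torch.stack([models[j].A.data for j in nb + [i]]).mean(0)
             for i, nb in enumerate(neighbors)]
    new_B = [torch.stack([models[j].B.data for j in nb + [i]]).mean(0)
             for i, nb in enumerate(neighbors)]
    for m, a, b in zip(models, new_A, new_B):
        m.A.data.copy_(a)
        m.B.data.copy_(b)

if __name__ == "__main__":
    torch.manual_seed(0)
    n_clients, d = 4, 16
    clients = [LoRALinear(d, d) for _ in range(n_clients)]
    ring = [[(i - 1) % n_clients, (i + 1) % n_clients] for i in range(n_clients)]
    for _ in range(5):
        for m in clients:                       # local update on synthetic private data
            x, y = torch.randn(32, d), torch.randn(32, d)
            local_step(m, x, y)
        gossip_average(clients, ring)           # decentralized averaging over the ring
```

Communicating only the rank-r factors keeps per-round traffic proportional to r(d_in + d_out) per layer rather than the full weight matrix, which is the usual appeal of combining LoRA with decentralized averaging; the paper's experiments on BERT and LLaMA-2 evaluate this setup under data heterogeneity and quantization.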
Anthology ID:
2025.realm-1.24
Volume:
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Ehsan Kamalloo, Nicolas Gontier, Xing Han Lu, Nouha Dziri, Shikhar Murty, Alexandre Lacoste
Venues:
REALM | WS
Publisher:
Association for Computational Linguistics
Pages:
334–345
URL:
https://preview.aclanthology.org/display_plenaries/2025.realm-1.24/
Cite (ACL):
Sajjad Ghiasvand, Mahnoosh Alizadeh, and Ramtin Pedarsani. 2025. Decentralized Low-Rank Fine-Tuning of Large Language Models. In Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025), pages 334–345, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Decentralized Low-Rank Fine-Tuning of Large Language Models (Ghiasvand et al., REALM 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.realm-1.24.pdf