pFedGPT: Hierarchically Optimizing LoRA Aggregation Weights for Personalized Federated GPT Models

Zhanming Shen, Tianqi Xu, Hao Wang, Jian Li, Miao Pan


Abstract
Federated finetuning of Large Language Models (LLMs) using Low-Rank Adaptation (LoRA) offers computational efficiency and preserves data privacy. However, applying LoRA in federated settings faces significant challenges: standard approaches struggle with data heterogeneity, and existing personalization techniques fail to precisely adapt shared global knowledge to individual client needs. To address these issues, we propose pFedGPT, a framework that leverages Hierarchical Bayesian Optimization (HBO) for fine-grained, personalized LoRA aggregation. pFedGPT intelligently partitions LoRA parameters based on model structure and client information, then employs HBO to hierarchically search for optimal, module-specific weights. This enables a nuanced integration of the downloaded global LoRA state with each client’s local model, precisely capturing client-specific requirements. To manage the optimization cost inherent in HBO, pFedGPT incorporates efficient multi-fidelity evaluations and a curriculum learning strategy. Extensive experiments demonstrate that pFedGPT achieves state-of-the-art (SOTA) performance on personalized FL benchmarks, showcasing robustness and scalability while introducing only minimal (approx. 4%) additional optimization overhead. Our results also underscore the limitations of traditional FL methods for LoRA-based LLM personalization, highlighting the need for tailored approaches like pFedGPT.
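To make the aggregation idea in the abstract concrete, below is a minimal Python sketch of module-wise weighted merging of a downloaded global LoRA state with a client's local LoRA state. The grouping rule, the convex-combination form alpha * global + (1 - alpha) * local, and all names (module_group, aggregate_lora, the q_proj/v_proj keywords, the fixed candidate weights) are illustrative assumptions, not the paper's exact partitioning or update rule; in pFedGPT the per-group weights would be proposed by the hierarchical Bayesian optimizer rather than fixed by hand.

```python
import torch

# Hypothetical grouping: map each LoRA parameter name to a module group
# (attention vs. MLP). pFedGPT partitions parameters using model structure
# and client information; this keyword rule is only a stand-in.
def module_group(param_name: str) -> str:
    return "attn" if ("q_proj" in param_name or "v_proj" in param_name) else "mlp"

def aggregate_lora(local_lora, global_lora, weights):
    """Per-module convex combination of global and local LoRA tensors.

    `weights` maps a module group to alpha in [0, 1]; each merged parameter
    is alpha * global + (1 - alpha) * local. This interpolation form is an
    assumption for illustration.
    """
    merged = {}
    for name, local_t in local_lora.items():
        alpha = weights[module_group(name)]
        merged[name] = alpha * global_lora[name] + (1.0 - alpha) * local_t
    return merged

# Toy LoRA states: two modules, each with low-rank factors A and B.
names = ["layer0.q_proj.lora_A", "layer0.q_proj.lora_B",
         "layer0.mlp.lora_A", "layer0.mlp.lora_B"]
local_lora = {n: torch.randn(8, 8) for n in names}
global_lora = {n: torch.randn(8, 8) for n in names}

# In pFedGPT these per-group weights would come from the HBO search and be
# scored on held-out client data; a fixed guess stands in here.
candidate = {"attn": 0.7, "mlp": 0.3}
merged = aggregate_lora(local_lora, global_lora, candidate)
print({n: tuple(t.shape) for n, t in merged.items()})
```

The search over `candidate` dictionaries is where the hierarchical Bayesian optimization, multi-fidelity evaluation, and curriculum strategy described in the abstract would plug in; the sketch only shows the aggregation step being optimized.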
Anthology ID:
2025.emnlp-main.239
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4766–4778
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.239/
Cite (ACL):
Zhanming Shen, Tianqi Xu, Hao Wang, Jian Li, and Miao Pan. 2025. pFedGPT: Hierarchically Optimizing LoRA Aggregation Weights for Personalized Federated GPT Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 4766–4778, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
pFedGPT: Hierarchically Optimizing LoRA Aggregation Weights for Personalized Federated GPT Models (Shen et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.239.pdf
Checklist:
2025.emnlp-main.239.checklist.pdf