FedSpaLLM: Federated Pruning of Large Language Models

Guangji Bai, Yijiang Li, Zilinghan Li, Liang Zhao, Kibaek Kim


Abstract
Large Language Models (LLMs) achieve state-of-the-art performance but are challenging to deploy due to their high computational and storage demands. Pruning can reduce model size, yet existing methods assume public access to calibration data, which is impractical for privacy-sensitive applications. To address the challenge of pruning LLMs in privacy-preserving settings, we propose FedSpaLLM, the first federated learning framework designed specifically for pruning LLMs. FedSpaLLM enables clients to locally prune their models based on private data while accounting for system heterogeneity and maintaining communication efficiency. Our framework introduces several key innovations: (1) a novel ℓ0-norm aggregation function that ensures only non-zero weights are averaged across clients, preserving important model parameters; (2) an adaptive mask expansion technique that meets global sparsity targets while accommodating client-specific pruning decisions; and (3) a layer sampling strategy that reduces communication overhead and personalizes the pruning process based on client resources. Extensive experiments show that FedSpaLLM improves pruning performance in diverse federated settings.
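The 0-norm (ℓ0) aggregation rule described in the abstract, averaging only the non-zero (unpruned) entries at each weight position, can be illustrated with a minimal NumPy sketch. This is an illustrative reconstruction from the abstract's description, not the paper's implementation; the function name `l0_norm_aggregate` is hypothetical.

```python
import numpy as np

def l0_norm_aggregate(client_weights):
    """Average client weight tensors position-wise, counting only
    the clients that kept (did not prune) each weight.

    A position that every client pruned to zero stays zero;
    this is an illustrative sketch of the aggregation idea."""
    stacked = np.stack(client_weights)                 # (n_clients, ...)
    nonzero_counts = np.count_nonzero(stacked, axis=0)  # per-position keep count
    summed = stacked.sum(axis=0)
    # Divide by the number of non-zero contributors; where all clients
    # pruned the weight, the count is 0 and the result is forced to 0.
    return np.where(nonzero_counts > 0,
                    summed / np.maximum(nonzero_counts, 1),
                    0.0)

# Two clients pruned the same 2x2 layer with different masks.
w1 = np.array([[0.0, 2.0], [4.0, 0.0]])
w2 = np.array([[6.0, 0.0], [2.0, 0.0]])
print(l0_norm_aggregate([w1, w2]))
# (0,0): only w2 non-zero -> 6.0;  (0,1): only w1 -> 2.0
# (1,0): both non-zero   -> 3.0;  (1,1): both pruned -> 0.0
```

Compared with plain federated averaging, which would shrink a weight kept by only one client toward zero, this rule preserves the magnitude of parameters that any client judged important.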
Anthology ID: 2025.naacl-long.424
Volume: Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 8361–8373
URL: https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.424/
Cite (ACL): Guangji Bai, Yijiang Li, Zilinghan Li, Liang Zhao, and Kibaek Kim. 2025. FedSpaLLM: Federated Pruning of Large Language Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8361–8373, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): FedSpaLLM: Federated Pruning of Large Language Models (Bai et al., NAACL 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.424.pdf