One-for-All Pruning: A Universal Model for Customized Compression of Large Language Models

Rongguang Ye, Ming Tang


Abstract
Existing pruning methods for large language models (LLMs) focus on achieving high compression rates while maintaining model performance. Although these methods perform well when handling a single user’s compression request, their processing time grows linearly with the number of requests, making them inefficient in real-world scenarios with many simultaneous requests. To address this limitation, we propose a Universal Model for Customized Compression (UniCuCo) for LLMs, which introduces a StratNet that learns to map an arbitrary request to its optimal pruning strategy. The challenge in training StratNet lies in the high computational cost of evaluating pruning strategies and the non-differentiable nature of the pruning process, which blocks gradient backpropagation for StratNet updates. To overcome these challenges, we leverage a Gaussian process to approximate the evaluation process. Since the gradient of the Gaussian process is computable, we can use it to approximate the gradient of the non-differentiable pruning process, thereby enabling StratNet updates. Experimental results show that UniCuCo is 28 times faster than baselines when processing 64 requests, while maintaining accuracy comparable to baselines.
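Below is a minimal, hypothetical sketch of the surrogate-gradient idea described in the abstract, written in PyTorch. The GPSurrogate class, the rbf_kernel function, the small strat_net, and the placeholder evaluation data are all illustrative assumptions rather than the paper's implementation: a Gaussian process is fit to (strategy, score) pairs produced by the expensive pruning evaluation, and gradients of its differentiable posterior mean are backpropagated to update the strategy network.

```python
import torch

# Hypothetical sketch (not the released UniCuCo code): a Gaussian-process
# surrogate stands in for the costly, non-differentiable pruning evaluation,
# and its differentiable posterior mean supplies gradients for updating a
# StratNet that maps a compression request to a pruning strategy.

def rbf_kernel(a, b, lengthscale=1.0):
    # Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 * l^2)).
    d2 = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)
    return torch.exp(-0.5 * d2 / lengthscale ** 2)

class GPSurrogate:
    """GP regression whose posterior mean is differentiable w.r.t. the query."""
    def __init__(self, x_train, y_train, noise=1e-4):
        self.x_train = x_train                              # evaluated strategies
        k = rbf_kernel(x_train, x_train) + noise * torch.eye(len(x_train))
        self.alpha = torch.linalg.solve(k, y_train)         # K^{-1} y

    def mean(self, x_query):
        # Posterior mean mu(x) = k(x, X) K^{-1} y; autograd gives d mu / d x.
        return rbf_kernel(x_query, self.x_train) @ self.alpha

# Illustrative StratNet: request (e.g., target sparsity) -> per-layer pruning ratios.
strat_net = torch.nn.Sequential(
    torch.nn.Linear(1, 16), torch.nn.ReLU(),
    torch.nn.Linear(16, 4), torch.nn.Sigmoid(),
)
opt = torch.optim.Adam(strat_net.parameters(), lr=1e-2)

# Stand-in data: strategies already scored by the true (expensive) pruning evaluation.
x_eval = torch.rand(32, 4)
y_eval = -(x_eval - 0.5).pow(2).sum(dim=1, keepdim=True)   # placeholder scores
gp = GPSurrogate(x_eval, y_eval)

requests = torch.rand(8, 1)                                # batch of compression requests
for _ in range(100):
    opt.zero_grad()
    strategies = strat_net(requests)
    loss = -gp.mean(strategies).mean()                     # maximize predicted quality
    loss.backward()                                        # gradient flows through the GP mean
    opt.step()
```

Because the GP posterior mean is an analytic function of the query strategy, a single surrogate amortizes evaluation across many requests, which is what allows one StratNet to serve arbitrary compression requests without re-running the pruner per request.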
Anthology ID:
2025.findings-acl.132
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2591–2604
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.132/
Cite (ACL):
Rongguang Ye and Ming Tang. 2025. One-for-All Pruning: A Universal Model for Customized Compression of Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 2591–2604, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
One-for-All Pruning: A Universal Model for Customized Compression of Large Language Models (Ye & Tang, Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.132.pdf