QPruner: Probabilistic Decision Quantization for Structured Pruning in Large Language Models

Changhai Zhou, Yuhua Zhou, Yibin Wang, Shijie Han, Qian Qiao, Hongguang Li


Abstract
The rise of large language models (LLMs) has significantly advanced various natural language processing (NLP) tasks. However, the resource demands of these models pose substantial challenges. Structured pruning is an effective approach to reducing model size, but it often causes significant accuracy degradation, necessitating parameter updates to recover performance. Unfortunately, such fine-tuning requires substantial memory, which limits its applicability. To address these challenges, we introduce quantization into the structured pruning framework to reduce memory consumption during both fine-tuning and inference. However, the combined errors from pruning and quantization make fine-tuning more difficult, requiring a more refined quantization scheme. To this end, we propose QPruner, a novel framework that first applies structured pruning to reduce model size and then applies a layer-wise mixed-precision quantization scheme. Quantization precision is assigned to each layer according to its importance to the target task, and Bayesian optimization is used to refine the precision-allocation strategy, balancing model accuracy against memory efficiency. Extensive experiments on benchmark datasets demonstrate that QPruner significantly outperforms existing methods in memory savings while maintaining or improving model performance.
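The abstract's precision-allocation step can be illustrated with a minimal sketch. This is not the authors' implementation: the per-layer importance scores, candidate bit-widths, budget, and proxy objective below are all hypothetical, and a plain random search stands in for the Bayesian optimization the paper uses.

```python
import random

# Hypothetical per-layer importance scores (higher = more sensitive to quantization error).
importances = [0.9, 0.4, 0.7, 0.2, 0.8, 0.3]
bit_choices = [4, 8, 16]   # candidate precisions per layer (illustrative)
memory_budget = 48         # total bit budget across all layers (toy units)

def memory_cost(bits):
    # Total bits assigned across layers; a real system would weight by layer size.
    return sum(bits)

def proxy_accuracy(bits):
    # Toy surrogate: giving important layers higher precision scores better.
    return sum(imp * b for imp, b in zip(importances, bits))

def allocate(iterations=2000, seed=0):
    """Random-search stand-in for the Bayesian-optimization refinement:
    sample precision assignments and keep the best one within the budget."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(iterations):
        bits = [rng.choice(bit_choices) for _ in importances]
        if memory_cost(bits) > memory_budget:
            continue  # infeasible under the memory budget
        score = proxy_accuracy(bits)
        if score > best_score:
            best, best_score = bits, score
    return best

plan = allocate()
print("precision plan:", plan, "cost:", memory_cost(plan))
```

In practice one would replace the random sampler with a proper Bayesian optimizer over the discrete precision space and the proxy objective with measured task accuracy of the pruned-then-quantized model.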
Anthology ID: 2025.findings-naacl.240
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4276–4286
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.240/
Cite (ACL):
Changhai Zhou, Yuhua Zhou, Yibin Wang, Shijie Han, Qian Qiao, and Hongguang Li. 2025. QPruner: Probabilistic Decision Quantization for Structured Pruning in Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4276–4286, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
QPruner: Probabilistic Decision Quantization for Structured Pruning in Large Language Models (Zhou et al., Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.240.pdf