Revisiting Pruning vs Quantization for Small Language Models

Zihan Zhou, Simon Kurz, Zhixue Zhao


Abstract
Deploying language models on resource-constrained devices, such as mobile phones, wearables, and on-device AI assistants, demands compact, efficient models without sacrificing performance. Compressing Small Language Models (SLMs) is particularly suited to these scenarios, yet their compression dynamics remain underexplored compared to those of Large Language Models (LLMs). We systematically evaluate leading post-training pruning (SparseGPT, Wanda) and quantization (GPTQ, AWQ) methods across six SLMs ranging from 0.5B to 3.8B parameters, seven languages, and seven downstream tasks. Our results show that quantization consistently outperforms pruning in preserving model fidelity, multilingual perplexity, and reasoning accuracy. However, quantization's advantage diminishes on complex knowledge and reasoning tasks such as OpenBookQA, highlighting a disconnect between compression fidelity and downstream task performance. Notably, trends observed in LLMs (e.g., Wanda performing competitively with SparseGPT) do not generalize to SLMs. For practitioners, we recommend prioritizing quantization (particularly AWQ) for SLM compression and caution against relying on a single evaluation metric.
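As a rough illustration of the post-training quantization workflow the paper evaluates, the sketch below applies 4-bit GPTQ to a small causal language model via Hugging Face transformers (with the optimum/auto-gptq backend). The model id, bit width, group size, and calibration dataset here are illustrative assumptions, not necessarily the authors' exact experimental setup.

# Minimal sketch: 4-bit GPTQ post-training quantization of a small LM.
# Assumes Hugging Face transformers with the optimum/auto-gptq backend installed.
# The model id and calibration settings are hypothetical, not the paper's exact config.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "Qwen/Qwen2.5-0.5B"  # hypothetical choice of a 0.5B-parameter SLM
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit weights, group size 128, calibrated on C4 samples (common GPTQ settings).
quant_config = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tokenizer)

# Quantization runs layer by layer as the model is loaded.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quant_config,
)

model.save_pretrained("qwen2.5-0.5b-gptq-4bit")
tokenizer.save_pretrained("qwen2.5-0.5b-gptq-4bit")

In the study, quantized checkpoints like this are compared against SparseGPT- and Wanda-pruned counterparts on perplexity and downstream task accuracy.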
Anthology ID:
2025.findings-emnlp.645
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12055–12070
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.645/
DOI:
10.18653/v1/2025.findings-emnlp.645
Cite (ACL):
Zihan Zhou, Simon Kurz, and Zhixue Zhao. 2025. Revisiting Pruning vs Quantization for Small Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 12055–12070, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Revisiting Pruning vs Quantization for Small Language Models (Zhou et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.645.pdf
Checklist:
2025.findings-emnlp.645.checklist.pdf