Distribution-aware Low-bitwidth Quantization for Large Language Models

Bao Tan Duy Huynh, Takashi Tsunakawa, Masafumi Nishida


Abstract
The increasing scale and complexity of large language models (LLMs) present significant computational and memory challenges, limiting their widespread deployment. Post-training quantization (PTQ) has emerged as a key technique for mitigating these challenges without costly retraining. However, compressing models to ultra-low bitwidths (e.g., 2–3 bits) while maintaining accuracy remains difficult. In this study, we present a comprehensive PTQ framework that addresses this problem by compressing LLM weights through three core innovations: (1) a calibration process guided by Kullback-Leibler divergence minimization to preserve the original weight distribution, (2) a learnable codebook optimization mechanism employing noise substitution for vector quantization to enable robust gradient estimation, and (3) a layer-grouping strategy based on statistical distribution similarity to improve parameter efficiency. Experimental evaluations on large-scale models show that the proposed framework achieves competitive performance compared with state-of-the-art quantization techniques. Importantly, these results are obtained without any post-quantization fine-tuning, highlighting the efficiency and practical applicability of our approach for deploying highly compressed LLMs.
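To make point (1) concrete, the general idea of KL-divergence-guided calibration can be sketched as follows. This is not the authors' actual method (the paper's details are not reproduced on this page); it is a minimal illustration, assuming a simple symmetric uniform quantizer and a clipping-threshold search, of how one might pick quantization parameters that best preserve the original weight distribution. The function names (`kl_divergence`, `calibrate_clip`) and all parameter choices are hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence between two histograms, normalized to distributions."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def calibrate_clip(weights, n_bits=3, n_bins=128, candidates=20):
    """Search clipping thresholds and keep the one whose quantized weights
    best preserve the original weight histogram under KL divergence.

    Illustrative only: a symmetric uniform quantizer stands in for the
    paper's (unspecified here) quantization scheme.
    """
    w = weights.flatten().astype(np.float64)
    max_abs = np.abs(w).max()
    levels = 2 ** n_bits - 1          # e.g. 7 levels at 3 bits
    hist_p, edges = np.histogram(w, bins=n_bins)
    best_t, best_kl = max_abs, float("inf")
    for frac in np.linspace(0.5, 1.0, candidates):
        t = frac * max_abs            # candidate clipping threshold
        scale = 2 * t / levels
        # quantize, clip to the representable range, dequantize
        q = np.clip(np.round(w / scale), -(levels // 2), levels // 2) * scale
        hist_q, _ = np.histogram(q, bins=edges)
        kl = kl_divergence(hist_p.astype(float), hist_q.astype(float))
        if kl < best_kl:
            best_kl, best_t = kl, t
    return best_t, best_kl
```

Aggressive clipping shrinks the quantization step (less rounding error for small weights) at the cost of saturating outliers; minimizing KL over candidate thresholds trades these off distribution-wide rather than minimizing per-weight error alone.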
Anthology ID:
2026.lrec-main.789
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resource Association
Pages:
10057–10070
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.789/
Cite (ACL):
Bao Tan Duy Huynh, Takashi Tsunakawa, and Masafumi Nishida. 2026. Distribution-aware Low-bitwidth Quantization for Large Language Models. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 10057–10070, Palma de Mallorca, Spain. ELRA Language Resource Association.
Cite (Informal):
Distribution-aware Low-bitwidth Quantization for Large Language Models (Huynh et al., LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.789.pdf