Calibrating Beyond English: Language Diversity for Better Quantized Multilingual LLMs

Everlyn Asiko Chimoto, Mostafa Elhoushi, Bruce Bassett


Abstract
Quantization is an effective technique for reducing the storage footprint and computational costs of Large Language Models (LLMs), but it often results in performance degradation. Existing post-training quantization methods typically use small, English-only calibration sets; however, their impact on multilingual models remains underexplored. We systematically evaluate eight calibration settings (five single-language and three multilingual mixes) across two quantizers (GPTQ, AWQ) on data from 10 different languages. Our findings reveal a consistent trend: non-English and multilingual calibration sets significantly improve perplexity compared to English-only baselines. Specifically, we observe notable average perplexity gains across both quantizers on Llama3.1 8B and Qwen2.5 7B, with multilingual mixes achieving the largest overall reductions, of up to 3.52 perplexity points. Furthermore, our analysis indicates that tailoring calibration sets to the evaluation language yields the largest improvements for individual languages, underscoring the importance of linguistic alignment. We also identify specific failure cases where certain language-quantizer combinations degrade performance, which we trace to differences in activation range distributions across languages. These results highlight that static, one-size-fits-all calibration is suboptimal, and that tailoring calibration data, both in language and diversity, plays a crucial role in robustly quantizing multilingual LLMs.
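The multilingual calibration mixes the abstract describes could be assembled along the following lines. This is a hypothetical sketch, not the authors' code: the `build_calibration_mix` helper and the per-language corpora dictionary are assumptions for illustration. The resulting list of texts would then be handed to a quantizer's calibration step (e.g. as the calibration dataset for GPTQ or AWQ).

```python
import random

def build_calibration_mix(corpora, n_samples=128, seed=0):
    """Sample a language-balanced calibration set for post-training quantization.

    corpora: dict mapping language code -> list of raw text strings.
    Returns a shuffled list of n_samples texts, drawn evenly across languages.
    """
    rng = random.Random(seed)
    langs = sorted(corpora)
    per_lang = n_samples // len(langs)
    mix = []
    for lang in langs:
        # Equal share per language, so no single language dominates
        # the activation statistics seen during calibration.
        mix.extend(rng.sample(corpora[lang], per_lang))
    # Top up with random extra samples if n_samples isn't divisible
    # by the number of languages.
    while len(mix) < n_samples:
        lang = rng.choice(langs)
        mix.append(rng.choice(corpora[lang]))
    rng.shuffle(mix)
    return mix
```

A single-language calibration setting (e.g. calibrating in the evaluation language, which the paper finds most effective per language) corresponds to passing a `corpora` dict with one entry.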
Anthology ID:
2026.eacl-long.223
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
4822–4838
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.223/
Cite (ACL):
Everlyn Asiko Chimoto, Mostafa Elhoushi, and Bruce Bassett. 2026. Calibrating Beyond English: Language Diversity for Better Quantized Multilingual LLMs. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4822–4838, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Calibrating Beyond English: Language Diversity for Better Quantized Multilingual LLMs (Chimoto et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.223.pdf