Pre-trained Models Perform the Best When Token Distributions Follow Zipf’s Law

Yanjin He, Qingkai Zeng, Meng Jiang


Abstract
Tokenization is a fundamental step in natural language processing (NLP) and other sequence modeling domains, where the choice of vocabulary size significantly impacts model performance. Despite its importance, selecting an optimal vocabulary size remains underexplored, typically relying on heuristics or dataset-specific choices. In this work, we propose a principled method for determining the vocabulary size by analyzing token frequency distributions through Zipf’s law. We show that downstream task performance correlates with how closely token distributions follow power-law behavior, and that aligning with Zipfian scaling improves both model efficiency and effectiveness. Extensive experiments across NLP, genomics, and chemistry demonstrate that models consistently achieve peak performance when the token distribution closely adheres to Zipf’s law, establishing Zipfian alignment as a robust and generalizable criterion for vocabulary size selection. The code and data are available at: https://github.com/yanjinhe/Tokenizer
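
To make the abstract's core idea concrete, below is a minimal Python sketch, not taken from the paper's released code, of how one might quantify how closely a token distribution follows Zipf's law: count token frequencies, sort them by rank, and fit a line in log-log space. The helper name `zipf_fit` and the slope/R² criterion are illustrative assumptions rather than the authors' exact metric; the paper's procedure is in the linked repository.

```python
# Minimal sketch (assumed, not the paper's exact procedure): measure how well a
# tokenized corpus's rank-frequency curve follows a power law (Zipf's law) by
# fitting a line in log-log space. Zipf's law predicts a slope near -1.
from collections import Counter
import numpy as np

def zipf_fit(token_ids):
    """Return (slope, r_squared) of a linear fit to log(rank) vs. log(frequency)."""
    freqs = np.array(sorted(Counter(token_ids).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1, dtype=float)
    log_r, log_f = np.log(ranks), np.log(freqs)
    slope, intercept = np.polyfit(log_r, log_f, 1)
    pred = slope * log_r + intercept
    ss_res = np.sum((log_f - pred) ** 2)
    ss_tot = np.sum((log_f - log_f.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# Hypothetical usage: tokenize the corpus with several candidate vocabulary
# sizes and prefer the one whose distribution is most Zipf-like
# (slope closest to -1, highest R^2).
```
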
Anthology ID: 2025.emnlp-main.1421
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 27997–28009
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1421/
Cite (ACL): Yanjin He, Qingkai Zeng, and Meng Jiang. 2025. Pre-trained Models Perform the Best When Token Distributions Follow Zipf’s Law. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 27997–28009, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Pre-trained Models Perform the Best When Token Distributions Follow Zipf’s Law (He et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1421.pdf
Checklist: 2025.emnlp-main.1421.checklist.pdf