Improving Consistency in LLM Inference using Probabilistic Tokenization

Ashutosh Sathe, Divyanshu Aggarwal, Sunayana Sitaram


Abstract
Prior research has demonstrated noticeable performance gains from probabilistic tokenization, an approach that employs multiple tokenizations of the same input string during language model training. Despite these promising findings, modern large language models (LLMs) have yet to be trained using probabilistic tokenization. Interestingly, while the tokenizers of these contemporary LLMs can generate multiple tokenizations, this property remains underutilized. In this work, we propose a novel method that leverages the multiple-tokenization capability of modern LLM tokenizers to enhance the self-consistency of LLMs on reasoning tasks. Our experiments indicate that, when using probabilistic tokenization, LLMs generate logically diverse reasoning paths, moving beyond mere surface-level linguistic diversity. We carefully study probabilistic tokenization and, through extensive experimentation on 5 LLM families and 4 reasoning benchmarks, offer insights that explain the self-consistency improvements it brings.
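To make the tokenizer property the abstract refers to concrete, the sketch below samples several distinct tokenizations of the same string using SentencePiece's subword-regularization sampling. This is an illustrative assumption about how multiple tokenizations can be obtained, not the authors' exact pipeline; the "tokenizer.model" file name is a placeholder for whichever SentencePiece model the LLM in question uses.

```python
# Minimal sketch: sample several tokenizations of one string with SentencePiece.
# Assumes a local SentencePiece model file "tokenizer.model" (placeholder name).
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
text = "Improving consistency in LLM inference"

# nbest_size=-1 samples from the full lattice of valid segmentations;
# alpha controls how strongly sampling concentrates on the 1-best split.
for _ in range(5):
    pieces = sp.encode(text, out_type=str, enable_sampling=True,
                       alpha=0.1, nbest_size=-1)
    print(pieces)
```

Each sampled tokenization can then be fed to the model and the resulting answers aggregated (e.g., by majority vote) in the usual self-consistency fashion; see the paper itself for the method actually evaluated.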
Anthology ID: 2025.findings-naacl.268
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 4766–4778
URL: https://preview.aclanthology.org/moar-dois/2025.findings-naacl.268/
DOI: 10.18653/v1/2025.findings-naacl.268
Cite (ACL): Ashutosh Sathe, Divyanshu Aggarwal, and Sunayana Sitaram. 2025. Improving Consistency in LLM Inference using Probabilistic Tokenization. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4766–4778, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Improving Consistency in LLM Inference using Probabilistic Tokenization (Sathe et al., Findings 2025)
PDF: https://preview.aclanthology.org/moar-dois/2025.findings-naacl.268.pdf