Mind the Gap: A Closer Look at Tokenization for Multiple-Choice Question Answering with LLMs

Mario Sanz-Guerrero, Minh Duc Bui, Katharina von der Wense


Abstract
When evaluating large language models (LLMs) with multiple-choice question answering (MCQA), it is common to end the prompt with the string “*Answer:*” to facilitate automated answer extraction via next-token probabilities. However, there is no consensus on how to tokenize the space following the colon, often overlooked as a trivial choice. In this paper, we uncover accuracy differences of up to 11% due to this (seemingly irrelevant) tokenization variation as well as reshuffled model rankings, raising concerns about the reliability of LLM comparisons in prior work. Surprisingly, we are able to recommend one specific strategy – tokenizing the space *together* with the answer letter – as we observe consistent and statistically significant performance improvements. Additionally, it improves model calibration, enhancing the reliability of the model’s confidence estimates. Our findings underscore the importance of careful evaluation design and highlight the need for standardized, transparent evaluation protocols to ensure reliable and comparable results.
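The two strategies contrasted in the abstract can be illustrated with a minimal sketch (not the authors' code) using the Hugging Face transformers API. The model name ("gpt2") and the example question are placeholders chosen purely for illustration; the paper evaluates a range of LLMs and benchmarks.

```python
# Minimal sketch contrasting two ways to tokenize the space after "Answer:"
# when scoring MCQA options via next-token probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, not one used in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

question = (
    "What is the capital of France?\n"
    "A. Berlin\nB. Paris\nC. Rome\nD. Madrid\n"
)
choices = ["A", "B", "C", "D"]

def choice_logprobs(prompt: str, letter_variants: list[str]) -> dict[str, float]:
    """Return the next-token log-probability of each answer-letter variant."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]      # logits for the next token
    log_probs = torch.log_softmax(logits, dim=-1)
    scores = {}
    for letter in letter_variants:
        ids = tokenizer.encode(letter, add_special_tokens=False)
        scores[letter] = log_probs[ids[0]].item()    # log-prob of the first token
    return scores

# Strategy 1: the space stays in the prompt; letters are tokenized alone ("A").
print(choice_logprobs(question + "Answer: ", choices))

# Strategy 2 (the one the paper recommends): the prompt ends at the colon and
# the space is tokenized together with the letter (" A").
print(choice_logprobs(question + "Answer:", [" " + c for c in choices]))
```

The two calls can rank the options differently because " A" and "A" are distinct tokens for most subword vocabularies, which is the seemingly trivial variation the paper shows can shift accuracy by up to 11%.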
Anthology ID:
2025.emnlp-main.988
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
19584–19594
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.988/
Cite (ACL):
Mario Sanz-Guerrero, Minh Duc Bui, and Katharina von der Wense. 2025. Mind the Gap: A Closer Look at Tokenization for Multiple-Choice Question Answering with LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 19584–19594, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Mind the Gap: A Closer Look at Tokenization for Multiple-Choice Question Answering with LLMs (Sanz-Guerrero et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.988.pdf
Checklist:
 2025.emnlp-main.988.checklist.pdf