Learning to Insert [PAUSE] Tokens for Better Reasoning

Eunki Kim, Sangryul Kim, James Thorne


Abstract
To enhance reasoning capabilities, previous works have explored incorporating special-purpose tokens into the training process. These strategies strengthen the learning mechanism of transformer-based large language models (LLMs). Building on prior research showing that inserting dummy tokens consecutively just before reasoning steps can enhance effectiveness, we introduce a novel approach termed Dynamic Inserting Tokens Training (DIT). Our method identifies positions within sequences where model confidence is lowest according to token log-likelihood. Strategically inserting [PAUSE] tokens at these positions bolsters the model's predictive capability for subsequent tokens. Experimental results across diverse datasets and models, ranging from 2.7B to 8B parameters, demonstrate that DIT consistently outperforms traditional fine-tuning and previous token-insertion methods. With this simple yet effective method, we achieve accuracy gains of up to 4.7%p on GSM8K and 3.23%p on AQUA-RAT, and pass@1 improvements of up to 3.4%p on MBPP. Our work demonstrates a model-based, dynamic approach rather than a heuristic one, thereby broadening the scope of research on reasoning.
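The core mechanism described in the abstract, scoring each token's log-likelihood under the model and inserting [PAUSE] tokens where confidence is lowest, can be illustrated concretely. Below is a minimal Python sketch, not the authors' released code, assuming a HuggingFace-style causal LM; the model name, the `num_pauses` parameter, and the `insert_pauses` helper are illustrative placeholders rather than details from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates models from 2.7B to 8B
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Register [PAUSE] as a new special token and grow the embeddings to match.
tok.add_special_tokens({"additional_special_tokens": ["[PAUSE]"]})
model.resize_token_embeddings(len(tok))
pause_id = tok.convert_tokens_to_ids("[PAUSE]")

@torch.no_grad()
def insert_pauses(text: str, num_pauses: int = 3) -> list[int]:
    """Return token ids with [PAUSE] inserted before the lowest-confidence positions."""
    ids = tok(text, return_tensors="pt").input_ids           # shape (1, T)
    logits = model(ids).logits                                # shape (1, T, V)
    # Log-likelihood the model assigns to each actual next token given its prefix.
    logp = torch.log_softmax(logits[0, :-1], dim=-1)          # (T-1, V)
    tok_ll = logp.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)  # (T-1,)
    # The k positions (indices into the original sequence) with lowest confidence.
    k = min(num_pauses, tok_ll.numel())
    worst = set((torch.topk(-tok_ll, k).indices + 1).tolist())
    out = []
    for i, t in enumerate(ids[0].tolist()):
        if i in worst:
            out.append(pause_id)  # pause just before the low-confidence token
        out.append(t)
    return out

print(tok.decode(insert_pauses("Natalia sold 48 clips in April and half as many in May.")))
```

Note that this sketch only shows the position-selection and insertion step; per the abstract, DIT uses such sequences during training, so a full reproduction would fine-tune on the augmented data.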
Anthology ID:
2025.findings-acl.1217
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
23760–23777
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1217/
Cite (ACL):
Eunki Kim, Sangryul Kim, and James Thorne. 2025. Learning to Insert [PAUSE] Tokens for Better Reasoning. In Findings of the Association for Computational Linguistics: ACL 2025, pages 23760–23777, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Learning to Insert [PAUSE] Tokens for Better Reasoning (Kim et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1217.pdf