Thought calibration: Efficient and confident test-time scaling

Menghua Wu, Cai Zhou, Stephen Bates, Tommi Jaakkola


Abstract
Reasoning large language models achieve impressive test-time scaling by thinking for longer, but this performance gain comes at significant compute cost. Directly limiting the test-time budget hurts overall performance, but not all problems are equally difficult. We propose thought calibration to decide dynamically when thinking can be terminated. To calibrate our decision rule, we view a language model’s growing body of thoughts as a nested sequence of reasoning trees, where the goal is to identify the point at which novel reasoning plateaus. We realize this framework through lightweight probes that operate on top of the language model’s hidden representations, which are informative of both the reasoning structure and the overall consistency of the response. Based on three reasoning language models and four datasets, thought calibration preserves model performance with up to a 60% reduction in thinking tokens on in-distribution data, and up to a 20% reduction on out-of-distribution data.
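The abstract describes lightweight probes over the model's hidden representations that decide when to terminate thinking. As a rough illustration only (not the authors' implementation), a minimal sketch of such a stopping rule might apply a linear probe to the latest hidden state and halt once the predicted probability that reasoning has plateaued crosses a calibrated threshold; the probe weights, function name, and threshold below are all hypothetical.

```python
import numpy as np

def should_stop_thinking(hidden_states, probe_w, probe_b, threshold=0.9):
    """Hypothetical stopping rule: score the most recent hidden state
    with a linear probe and stop once the predicted probability that
    novel reasoning has plateaued reaches the threshold.

    hidden_states : list of 1-D arrays, one per generated thought/token
    probe_w, probe_b : learned probe weights and bias (assumed trained)
    """
    h = hidden_states[-1]                      # latest hidden representation
    logit = float(h @ probe_w + probe_b)       # linear probe score
    p_plateau = 1.0 / (1.0 + np.exp(-logit))   # sigmoid -> probability
    return p_plateau >= threshold
```

In practice such a check would run periodically during decoding (e.g. at each candidate stopping point), and the threshold would be calibrated on held-out data to trade off token savings against accuracy.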
Anthology ID: 2025.emnlp-main.722
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 14302–14316
URL: https://preview.aclanthology.org/ingest-luhme/2025.emnlp-main.722/
DOI: 10.18653/v1/2025.emnlp-main.722
Cite (ACL): Menghua Wu, Cai Zhou, Stephen Bates, and Tommi Jaakkola. 2025. Thought calibration: Efficient and confident test-time scaling. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 14302–14316, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Thought calibration: Efficient and confident test-time scaling (Wu et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-luhme/2025.emnlp-main.722.pdf
Checklist: 2025.emnlp-main.722.checklist.pdf