Think Just Enough: Leveraging Self-Assessed Confidence for Adaptive Reasoning in Language Models

Junyeob Kim, Sang-goo Lee, Taeuk Kim


Abstract
Recent reinforcement learning (RL)-trained language models have demonstrated strong performance on complex reasoning tasks by producing long and detailed reasoning traces. However, despite these advancements, they often struggle to find the right balance in reasoning length: some terminate prematurely before reaching a correct answer (underthinking), while others continue reasoning beyond necessity, leading to inefficiency or even degraded accuracy (overthinking). To address these challenges, we propose a method for optimizing reasoning length via self-assessed confidence. By prompting the model to evaluate its own confidence at intermediate reasoning steps, we enable dynamic stopping once sufficient reasoning is achieved. Experiments across multiple reasoning benchmarks show that our approach improves computational efficiency without compromising answer quality. Furthermore, we find that confidence estimates from RL-trained reasoning models are more reliable than those from standard LLMs, making them a valuable internal signal for controlling reasoning depth.
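The core idea of confidence-gated early stopping can be sketched as a simple generation loop. This is an illustrative sketch, not the paper's implementation: `generate_step` and `assess_confidence` are hypothetical stubs standing in for model calls, and the threshold-based stopping rule is an assumption about how "sufficient reasoning" might be operationalized.

```python
# Minimal sketch of confidence-gated adaptive reasoning. All names are
# illustrative; the paper's actual prompting and stopping rule may differ.

def generate_step(question, trace):
    # Stub standing in for one decoded reasoning step from the model.
    return f"step-{len(trace) + 1}"

def assess_confidence(question, trace):
    # Stub standing in for the model's self-assessed confidence,
    # e.g. elicited with an intermediate "How confident are you?" prompt.
    # Here confidence simply grows with the number of steps taken.
    return min(1.0, 0.2 * len(trace))

def reason_adaptively(question, threshold=0.8, max_steps=16):
    """Generate reasoning steps until self-assessed confidence reaches
    `threshold`, then stop and answer -- avoiding both underthinking
    (too few steps) and overthinking (steps beyond necessity)."""
    trace = []
    for _ in range(max_steps):
        trace.append(generate_step(question, trace))
        if assess_confidence(question, trace) >= threshold:
            break  # sufficient reasoning reached: stop early
    return trace
```

With the stub confidence schedule above, `reason_adaptively("q")` stops after four steps rather than exhausting the `max_steps` budget, illustrating how an internal confidence signal can bound reasoning length.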
Anthology ID:
2026.findings-eacl.263
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5000–5006
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.263/
Cite (ACL):
Junyeob Kim, Sang-goo Lee, and Taeuk Kim. 2026. Think Just Enough: Leveraging Self-Assessed Confidence for Adaptive Reasoning in Language Models. In Findings of the Association for Computational Linguistics: EACL 2026, pages 5000–5006, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Think Just Enough: Leveraging Self-Assessed Confidence for Adaptive Reasoning in Language Models (Kim et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.263.pdf
Checklist:
 2026.findings-eacl.263.checklist.pdf