Read Your Own Mind: Reasoning Helps Surface Self-Confidence Signals in LLMs

Jakub Podolak, Rajeev Verma


Abstract
We study the source of uncertainty in DeepSeek R1-32B by analyzing its self-reported verbal confidence on question answering (QA) tasks. In the default answer-then-confidence setting, the model is regularly overconfident, whereas semantic entropy, obtained by sampling many responses, remains reliable. We hypothesize that this gap stems from semantic entropy's larger test-time compute budget, which allows a fuller exploration of the model's predictive distribution. We show that granting DeepSeek the budget to explore its own distribution, by forcing a long chain-of-thought before the final answer, substantially improves the quality of its verbal confidence scores, even on simple fact-retrieval questions that normally require no reasoning. Our analysis concludes that reliable uncertainty estimation requires explicit exploration of the generative space, and that self-reported confidence is trustworthy only after such exploration.
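
To make the sampling-based baseline concrete, the sketch below estimates semantic entropy from a set of sampled answers. This is a minimal illustration under stated assumptions, not the authors' implementation: the default exact-match predicate stands in for the entailment-based semantic clustering usually used for semantic entropy, and the function names and example data are ours.

import math

def semantic_entropy(samples, equivalent=None):
    """Estimate semantic entropy over a list of sampled answer strings."""
    if equivalent is None:
        # Stand-in predicate: treat answers as equivalent when they match
        # after light normalization. The paper's setting would use
        # entailment-based semantic clustering instead (an assumption here).
        equivalent = lambda a, b: a.strip(" .").lower() == b.strip(" .").lower()

    # Greedily group samples into semantic equivalence classes.
    clusters = []
    for s in samples:
        for cluster in clusters:
            if equivalent(s, cluster[0]):
                cluster.append(s)
                break
        else:
            clusters.append([s])

    # Shannon entropy of the empirical distribution over meaning clusters:
    # low entropy means the samples agree semantically (high confidence).
    n = len(samples)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Mostly agreeing samples yield low entropy.
print(semantic_entropy(["Paris", "paris.", "Paris", "Lyon"]))  # ~0.56 nats

The verbal-confidence setting studied in the paper instead asks the model to state a confidence directly; the abstract's claim is that this single-pass estimate becomes reliable only once a long chain-of-thought has explored the same distribution that the sampling above makes explicit.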
Anthology ID:
2025.uncertainlp-main.21
Volume:
Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Venues:
UncertaiNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
247–258
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.21/
Cite (ACL):
Jakub Podolak and Rajeev Verma. 2025. Read Your Own Mind: Reasoning Helps Surface Self-Confidence Signals in LLMs. In Proceedings of the 2nd Workshop on Uncertainty-Aware NLP (UncertaiNLP 2025), pages 247–258, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Read Your Own Mind: Reasoning Helps Surface Self-Confidence Signals in LLMs (Podolak & Verma, UncertaiNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.uncertainlp-main.21.pdf