Large Language Models Have Intrinsic Meta-Cognition, but Need a Good Lens

Ziyang Ma, Qingyue Yuan, Zhenglin Wang, Deyu Zhou


Abstract
Previous research has primarily focused on the cognitive error-detection capabilities of Large Language Models (LLMs), often prompting them to analyze mistakes in reasoning chains. However, few studies have examined the meta-cognitive abilities of LLMs (e.g., their self-awareness of step errors), which are crucial for their reliability. While work on LLM self-evaluation offers measures such as perplexity that can reflect answer correctness and thus serve as lenses on meta-cognition, these measures lack step-level analysis and adaptation. This paper studies how well LLM meta-cognition can be evaluated through current lenses and how those lenses can be improved. Specifically, we propose AutoMeco, an Automated Meta-cognition Evaluation framework for benchmarking the existing lenses. Furthermore, a training-free Markovian Intrinsic Reward Adjustment strategy, MIRA, is proposed to strengthen current meta-cognition lenses. Experimental results on three mathematical reasoning datasets and three LLMs support the soundness of AutoMeco through comparison with Best-of-N verification, and show that the meta-cognition ability of LLMs can be better evaluated with MIRA.
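The abstract treats perplexity as one possible "lens" on step-level meta-cognition. As an illustration only, and not the paper's AutoMeco framework or MIRA strategy, the minimal Python sketch below computes a per-step perplexity for a reasoning chain under a causal LM; the model name, prompt, and step splitting are hypothetical assumptions.

```python
# Illustrative sketch only: step-level perplexity as a candidate meta-cognition
# lens. Not the paper's method; model name and example are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder: any causal LM works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def step_perplexities(question: str, steps: list[str]) -> list[float]:
    """Perplexity of each reasoning step, conditioned on the question and all
    earlier steps. Lower values mean the model finds the step less surprising."""
    ppls = []
    prefix = question
    for step in steps:
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        full_ids = tokenizer(prefix + step, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(full_ids).logits              # (1, T, vocab)
        log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
        targets = full_ids[:, 1:]                        # next-token targets
        token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
        # The current step's tokens start roughly where the prefix ends
        # (re-tokenizing the concatenation can shift the boundary by a token).
        start = prefix_ids.shape[1] - 1
        ppls.append(torch.exp(-token_lp[:, start:].mean()).item())
        prefix = prefix + step
    return ppls

question = "Q: What is 12 * 3 + 4? Answer step by step."
steps = [" 12 * 3 = 36.", " 36 + 4 = 40.", " So the answer is 40."]
print(step_perplexities(question, steps))
```

A spike in per-step perplexity is one heuristic signal of a step the model is "unsure" about; whether such signals align with actual step errors is exactly the kind of question a benchmark like AutoMeco is meant to test.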
Anthology ID:
2025.emnlp-main.171
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3460–3477
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.171/
Cite (ACL):
Ziyang Ma, Qingyue Yuan, Zhenglin Wang, and Deyu Zhou. 2025. Large Language Models Have Intrinsic Meta-Cognition, but Need a Good Lens. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 3460–3477, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Large Language Models Have Intrinsic Meta-Cognition, but Need a Good Lens (Ma et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.171.pdf
Checklist:
 2025.emnlp-main.171.checklist.pdf