From Remembering to Metacognition: Do Existing Benchmarks Accurately Evaluate LLMs?

Geng Zhang, Yizhou Ying, Sihang Jiang, Jiaqing Liang, Guanglei Yue, Yifei Fu, Hailin Hu, Yanghua Xiao


Abstract
Despite the rapid development of large language models (LLMs), existing benchmark datasets often focus on low-level cognitive tasks, such as factual recall and basic comprehension, while providing limited coverage of higher-level reasoning skills, including analysis, evaluation, and creation. In this work, we systematically assess the cognitive depth of popular LLM benchmarks using Bloom’s Taxonomy, evaluating both the cognitive and knowledge dimensions. Our analysis reveals a pronounced imbalance: most datasets concentrate on “Remembering” and “Understanding”, while metacognitive and creative reasoning remain largely underrepresented. We also find that incorporating higher-level cognitive instructions into the current instruction fine-tuning process improves model performance. These findings highlight the need for future benchmarks to incorporate metacognitive evaluations in order to assess and enhance model performance more accurately.
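The abstract does not specify how benchmark items were mapped to Bloom’s Taxonomy levels; as a rough illustration only (not the authors’ pipeline), the sketch below tags questions with a cognitive level using simple keyword cues. The cue lists, level names as Python strings, and the tag_bloom_level helper are all hypothetical assumptions for this example; the paper itself should be consulted for the actual annotation procedure.

```python
# Hypothetical sketch (NOT the authors' method): assign a Bloom's Taxonomy
# cognitive level to a benchmark question using keyword heuristics.

BLOOM_LEVELS = [
    "Remembering", "Understanding", "Applying",
    "Analyzing", "Evaluating", "Creating",
]

# Illustrative verb cues per level; a real study would rely on trained
# classifiers or human/LLM annotation rather than string matching.
LEVEL_CUES = {
    "Remembering": ["define", "list", "recall", "what is"],
    "Understanding": ["explain", "summarize", "describe"],
    "Applying": ["use", "solve", "compute"],
    "Analyzing": ["compare", "contrast", "why does"],
    "Evaluating": ["justify", "critique", "which is better"],
    "Creating": ["design", "propose", "write a new"],
}

def tag_bloom_level(question: str) -> str:
    """Return the highest Bloom level whose cue words appear in the question."""
    q = question.lower()
    best = "Remembering"  # default to the lowest level
    for level in BLOOM_LEVELS:
        if any(cue in q for cue in LEVEL_CUES[level]):
            best = level
    return best

if __name__ == "__main__":
    samples = [
        "What is the capital of France?",
        "Compare the two algorithms and explain why one scales better.",
        "Design a new benchmark question that tests planning ability.",
    ]
    for q in samples:
        print(f"{tag_bloom_level(q):13s} | {q}")
```

Aggregating such tags over a dataset would give the kind of per-level distribution the abstract describes (e.g., the share of items at “Remembering” versus “Creating”).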
Anthology ID:
2025.findings-emnlp.724
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
13440–13457
Language:
URL:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.724/
DOI:
10.18653/v1/2025.findings-emnlp.724
Bibkey:
Cite (ACL):
Geng Zhang, Yizhou Ying, Sihang Jiang, Jiaqing Liang, Guanglei Yue, Yifei Fu, Hailin Hu, and Yanghua Xiao. 2025. From Remembering to Metacognition: Do Existing Benchmarks Accurately Evaluate LLMs?. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 13440–13457, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
From Remembering to Metacognition: Do Existing Benchmarks Accurately Evaluate LLMs? (Zhang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/name-variant-enfa-fane/2025.findings-emnlp.724.pdf
Checklist:
2025.findings-emnlp.724.checklist.pdf