LLMs meet Bloom’s Taxonomy: A Cognitive View on Large Language Model Evaluations

Thomas Huber, Christina Niklaus


Abstract
Current evaluation approaches for Large Language Models (LLMs) lack a structured framework that reflects the underlying cognitive abilities required for solving the tasks. This hinders a thorough understanding of the current level of LLM capabilities. For instance, it is widely accepted that LLMs perform well in terms of grammar, but it is unclear in which specific cognitive areas they excel or struggle. This paper introduces a novel perspective on the evaluation of LLMs that leverages a hierarchical classification of tasks. Specifically, we explore the most widely used benchmarks for LLMs to systematically identify how well these existing evaluation methods cover the levels of Bloom’s Taxonomy, a hierarchical framework for categorizing cognitive skills. This comprehensive analysis allows us to identify strengths and weaknesses in current LLM assessment strategies in terms of cognitive abilities, suggest directions for future benchmark development, and highlight potential avenues for LLM research. Our findings reveal that LLMs generally perform better on the lower levels of Bloom’s Taxonomy. Additionally, we find significant gaps in the coverage of cognitive skills in the most commonly used benchmarks.
Anthology ID:
2025.coling-main.350
Volume:
Proceedings of the 31st International Conference on Computational Linguistics
Month:
January
Year:
2025
Address:
Abu Dhabi, UAE
Editors:
Owen Rambow, Leo Wanner, Marianna Apidianaki, Hend Al-Khalifa, Barbara Di Eugenio, Steven Schockaert
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
5211–5246
URL:
https://preview.aclanthology.org/jlcl-multiple-ingestion/2025.coling-main.350/
Cite (ACL):
Thomas Huber and Christina Niklaus. 2025. LLMs meet Bloom’s Taxonomy: A Cognitive View on Large Language Model Evaluations. In Proceedings of the 31st International Conference on Computational Linguistics, pages 5211–5246, Abu Dhabi, UAE. Association for Computational Linguistics.
Cite (Informal):
LLMs meet Bloom’s Taxonomy: A Cognitive View on Large Language Model Evaluations (Huber & Niklaus, COLING 2025)
PDF:
https://preview.aclanthology.org/jlcl-multiple-ingestion/2025.coling-main.350.pdf