Memorization ≠ Understanding: Do Large Language Models Have the Ability of Scenario Cognition?

Boxiang Ma, Ru Li, Wang Yuanlong, Hongye Tan, Xiaoli Li


Abstract
Driven by vast and diverse textual data, large language models (LLMs) have demonstrated impressive performance across numerous natural language processing (NLP) tasks. Yet a critical question persists: does their generalization arise from mere memorization of training data or from deep semantic understanding? To investigate this, we propose a bi-perspective evaluation framework to assess LLMs’ scenario cognition—the ability to link semantic scenario elements with their arguments in context. Specifically, we introduce a novel scenario-based dataset comprising diverse textual descriptions of fictional facts, annotated with scenario elements. LLMs are evaluated through their capacity to answer scenario-related questions (model output perspective) and via probing their internal representations for encoded scenario element–argument associations (internal representation perspective). Our experiments reveal that current LLMs predominantly rely on superficial memorization, failing to achieve robust semantic scenario cognition, even in simple cases. These findings expose critical limitations in LLMs’ semantic understanding and offer cognitive insights for advancing their capabilities.
Anthology ID:
2025.emnlp-main.1047
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20758–20774
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1047/
Cite (ACL):
Boxiang Ma, Ru Li, Wang Yuanlong, Hongye Tan, and Xiaoli Li. 2025. Memorization ≠ Understanding: Do Large Language Models Have the Ability of Scenario Cognition?. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 20758–20774, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Memorization ≠ Understanding: Do Large Language Models Have the Ability of Scenario Cognition? (Ma et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1047.pdf
Checklist:
2025.emnlp-main.1047.checklist.pdf