Investigating Context Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style

Yuepei Li, Kang Zhou, Qiao Qiao, Bach Nguyen, Qing Wang, Qi Li


Abstract
Retrieval-augmented generation (RAG) improves Large Language Models (LLMs) by incorporating external information into the response generation process. However, how context-faithful LLMs are and what factors influence their context faithfulness remain largely unexplored. In this study, we investigate the impact of memory strength and evidence presentation on LLMs' receptiveness to external evidence. We quantify the memory strength of LLMs by measuring the divergence in their responses to different paraphrases of the same question, a factor not considered in previous work. We also generate evidence in various styles to examine LLMs' behavior. Our results show that for questions with high memory strength, LLMs are more likely to rely on internal memory. Furthermore, presenting paraphrased evidence significantly increases LLMs' receptiveness compared to simple repetition or adding details. These findings provide key insights for improving retrieval-augmented generation and context-aware LLMs. Our code is available at https://github.com/liyp0095/ContextFaithful.
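The abstract's memory-strength measure, divergence of an LLM's answers across paraphrases of the same question, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' released code: the `query_llm` stub, the string normalization, and the entropy-based consistency score are hypothetical stand-ins for whatever divergence measure the paper actually uses.

```python
from collections import Counter
from math import log2


def query_llm(question: str) -> str:
    """Placeholder for an actual model call (e.g., an API client); hypothetical."""
    raise NotImplementedError("Plug in your LLM call here.")


def normalize(answer: str) -> str:
    """Crude normalization so superficially different strings count as one answer."""
    return answer.strip().lower()


def answer_entropy(answers: list[str]) -> float:
    """Shannon entropy of the distribution of normalized answers."""
    counts = Counter(normalize(a) for a in answers)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())


def memory_strength(paraphrases: list[str]) -> float:
    """1.0 = same answer to every paraphrase (strong memory); 0.0 = all answers differ."""
    answers = [query_llm(q) for q in paraphrases]
    max_entropy = log2(len(answers)) if len(answers) > 1 else 1.0
    return 1.0 - answer_entropy(answers) / max_entropy
```

Under this reading, a question whose paraphrases all elicit the same answer scores near 1, and the paper's finding would predict that the model is then more likely to stick with its internal memory when given conflicting external evidence.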
Anthology ID: 2025.findings-acl.247
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues: Findings | WS
Publisher: Association for Computational Linguistics
Pages: 4789–4807
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.247/
Cite (ACL): Yuepei Li, Kang Zhou, Qiao Qiao, Bach Nguyen, Qing Wang, and Qi Li. 2025. Investigating Context Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style. In Findings of the Association for Computational Linguistics: ACL 2025, pages 4789–4807, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Investigating Context Faithfulness in Large Language Models: The Roles of Memory Strength and Evidence Style (Li et al., Findings 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.247.pdf