Towards Faithful Natural Language Explanations: A Study Using Activation Patching in Large Language Models

Wei Jie Yeo, Ranjan Satapathy, Erik Cambria


Abstract
Large Language Models (LLMs) are capable of generating persuasive Natural Language Explanations (NLEs) to justify their answers. However, the faithfulness of these explanations should not be taken at face value. Recent studies have proposed various methods to measure the faithfulness of NLEs, typically by inserting perturbations at the explanation or feature level. We argue that these approaches are neither comprehensive nor correctly designed according to the established definition of faithfulness. Moreover, we highlight the risks of grounding faithfulness findings on out-of-distribution samples. In this work, we leverage a causal mediation technique called activation patching to measure the faithfulness of an explanation towards supporting the explained answer. Our proposed metric, Causal Faithfulness, quantifies the consistency of causal attributions between explanations and the corresponding model outputs as an indicator of faithfulness. We experimented across models ranging from 2B to 27B parameters and found that models that underwent alignment tuning tend to produce more faithful and plausible explanations. We find that Causal Faithfulness is a promising improvement over existing faithfulness tests: it takes the model's internal computations into account and avoids the out-of-distribution concerns that could otherwise undermine the validity of faithfulness assessments.
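For readers unfamiliar with activation patching, the sketch below illustrates only the basic mechanic of the technique, not the paper's actual method or metric: run a model on a "clean" prompt and cache a hidden state at one layer, re-run on a "corrupted" prompt while splicing that cached state back in, and measure how much the patch restores the clean answer's logit. The model (GPT-2 as a small stand-in; the paper evaluates 2B-27B models), layer, prompts, and logit-recovery readout are all illustrative assumptions.

```python
# Minimal activation-patching sketch (illustrative; NOT the paper's implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # small stand-in model, chosen only for the sketch
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

LAYER = 6  # which transformer block to patch (arbitrary illustrative choice)
clean = "The Eiffel Tower is located in the city of"
corrupt = "The Colosseum is located in the city of"
clean_ids = tok(clean, return_tensors="pt").input_ids
corrupt_ids = tok(corrupt, return_tensors="pt").input_ids

def _hidden(output):
    # GPT2Block may return a tuple (hidden_states, ...) or a bare tensor
    return output[0] if isinstance(output, tuple) else output

# 1) Cache the clean run's hidden state at the chosen layer.
cache = {}
def save_hook(module, inputs, output):
    cache["h"] = _hidden(output).detach()

handle = model.transformer.h[LAYER].register_forward_hook(save_hook)
with torch.no_grad():
    model(clean_ids)
handle.remove()

# 2) Re-run on the corrupted prompt, patching the cached activation
#    back in at the same layer and the final token position.
def patch_hook(module, inputs, output):
    hidden = _hidden(output).clone()
    hidden[:, -1, :] = cache["h"][:, -1, :]
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    patched_logits = model(corrupt_ids).logits
handle.remove()

# 3) Causal effect of the patch: how much the clean answer's logit recovers.
with torch.no_grad():
    corrupt_logits = model(corrupt_ids).logits
answer = tok(" Paris", add_special_tokens=False).input_ids[0]
effect = (patched_logits[0, -1, answer] - corrupt_logits[0, -1, answer]).item()
print(f"logit recovery from patching layer {LAYER}: {effect:.3f}")
```

Sweeping this patch over layers and token positions localizes which activations causally mediate the output; the paper's Causal Faithfulness metric then compares such causal attributions for the answer against those for the generated explanation.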
Anthology ID:
2025.emnlp-main.529
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
10436–10458
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.529/
Cite (ACL):
Wei Jie Yeo, Ranjan Satapathy, and Erik Cambria. 2025. Towards Faithful Natural Language Explanations: A Study Using Activation Patching in Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 10436–10458, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Towards Faithful Natural Language Explanations: A Study Using Activation Patching in Large Language Models (Yeo et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.529.pdf
Checklist:
2025.emnlp-main.529.checklist.pdf