Measuring Chain of Thought Faithfulness by Unlearning Reasoning Steps

Martin Tutek, Fateme Hashemi Chaleshtori, Ana Marasovic, Yonatan Belinkov


Abstract
When prompted to think step-by-step, language models (LMs) produce a chain of thought (CoT), a sequence of reasoning steps that the model supposedly used to produce its prediction. Despite much work on CoT prompting, it is unclear whether the reasoning verbalized in a CoT is faithful to the model's parametric beliefs. We introduce a framework for measuring parametric faithfulness of generated reasoning and propose Faithfulness by Unlearning Reasoning steps (FUR), an instance of this framework. FUR erases information contained in reasoning steps from model parameters and measures faithfulness as the resulting effect on the model's prediction. Our experiments with four LMs and five multiple-choice question answering (MCQA) datasets show that FUR is frequently able to precisely change the underlying model's prediction for a given instance by unlearning key steps, indicating when a CoT is parametrically faithful. Further analysis shows that CoTs generated by models post-unlearning support different answers, hinting at a deeper effect of unlearning.
Anthology ID:
2025.emnlp-main.504
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
9946–9971
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.504/
Cite (ACL):
Martin Tutek, Fateme Hashemi Chaleshtori, Ana Marasovic, and Yonatan Belinkov. 2025. Measuring Chain of Thought Faithfulness by Unlearning Reasoning Steps. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 9946–9971, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Measuring Chain of Thought Faithfulness by Unlearning Reasoning Steps (Tutek et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.504.pdf
Checklist:
2025.emnlp-main.504.checklist.pdf