A Closer Look at Bias and Chain-of-Thought Faithfulness of Large (Vision) Language Models

Sriram Balasubramanian, Samyadeep Basu, Soheil Feizi


Abstract
Chain-of-thought (CoT) reasoning enhances the performance of large language models, but questions remain about whether these reasoning traces faithfully reflect the model's internal processes. We present the first comprehensive study of CoT faithfulness in large vision-language models (LVLMs), investigating how both text-based and previously unexplored image-based biases affect reasoning and bias articulation. Our work introduces a novel, fine-grained evaluation pipeline for categorizing bias articulation patterns, enabling significantly more precise analysis of CoT reasoning than previous methods. This framework reveals critical distinctions in how models process and respond to different types of biases, providing new insights into LVLM CoT faithfulness. We find that subtle image-based biases are rarely articulated compared to explicit text-based ones, even in models specialized for reasoning. Additionally, many models exhibit a previously unidentified phenomenon we term “inconsistent” reasoning: reasoning correctly before abruptly changing the answer, which can serve as a canary for detecting biased reasoning arising from unfaithful CoTs. We then apply the same evaluation pipeline to revisit CoT faithfulness in LLMs across varying levels of implicit cues, and find that current language-only reasoning models continue to struggle to articulate cues that are not overtly stated.
Anthology ID:
2025.findings-emnlp.723
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13406–13439
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.723/
DOI:
10.18653/v1/2025.findings-emnlp.723
Cite (ACL):
Sriram Balasubramanian, Samyadeep Basu, and Soheil Feizi. 2025. A Closer Look at Bias and Chain-of-Thought Faithfulness of Large (Vision) Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 13406–13439, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
A Closer Look at Bias and Chain-of-Thought Faithfulness of Large (Vision) Language Models (Balasubramanian et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.723.pdf
Checklist:
2025.findings-emnlp.723.checklist.pdf