Neutralizing Bias in LLM Reasoning using Entailment Graphs

Liang Cheng, Tianyi Li, Zhaowei Wang, Tianyang Liu, Mark Steedman


Abstract
LLMs are often claimed to be capable of Natural Language Inference (NLI), which is widely regarded as a cornerstone of more complex forms of reasoning. However, recent work shows that LLMs still hallucinate in NLI because of attestation bias: they over-rely on propositional memory as a shortcut, predicting entailment for hypotheses they have seen attested rather than reasoning from the premise. To mitigate this bias, we design an unsupervised framework that constructs counterfactual reasoning data and fine-tunes LLMs on it. To measure bias reduction, we build bias-adversarial variants of NLI datasets in which the predicates of premises are randomly replaced while the hypotheses are kept unchanged. Extensive evaluations show that our framework significantly reduces hallucinations arising from attestation bias. We then evaluate the fine-tuned LLMs on the original NLI datasets and on bias-neutralized versions of them, in which the original entities are replaced with randomly sampled ones. The results show that our framework consistently improves inferential performance on both the original and the bias-neutralized NLI datasets.
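The abstract describes two dataset transformations: a bias-adversarial variant (the premise predicate is randomly replaced while the hypothesis stays fixed) and a bias-neutralized variant (entities are replaced with randomly sampled ones). The following is a minimal Python sketch of what such transformations might look like; the replacement pools, function names, and example sentences are hypothetical illustrations, not the paper's actual construction (which, per the title, draws on entailment graphs).

```python
import random

# Hypothetical replacement pools; the paper's construction samples from
# real resources, not from toy lists like these.
RANDOM_PREDICATES = ["criticized", "visited", "hired", "sued"]
RANDOM_ENTITIES = ["Acme Corp", "Jordan Lee", "the committee"]

def make_bias_adversarial(premise_parts, hypothesis):
    """Bias-adversarial variant: randomly replace the premise predicate
    and keep the hypothesis unchanged. A model with attestation bias,
    having seen the hypothesis attested, will still predict entailment
    even though the altered premise no longer supports it."""
    subj, _pred, obj = premise_parts
    new_pred = random.choice(RANDOM_PREDICATES)
    return f"{subj} {new_pred} {obj}", hypothesis

def make_bias_neutralized(premise, hypothesis, entities):
    """Bias-neutralized variant: replace the original entities in both
    premise and hypothesis with randomly sampled ones, so that neither
    statement is likely to be attested in the model's memory."""
    for ent in entities:
        repl = random.choice(RANDOM_ENTITIES)
        premise = premise.replace(ent, repl)
        hypothesis = hypothesis.replace(ent, repl)
    return premise, hypothesis

# Example: a pair whose hypothesis is likely attested in pretraining data.
adv = make_bias_adversarial(("Google", "acquired", "YouTube"),
                            "Google owns YouTube")
neu = make_bias_neutralized("Google acquired YouTube",
                            "Google owns YouTube",
                            entities=["Google", "YouTube"])
print(adv)  # e.g. ('Google sued YouTube', 'Google owns YouTube') -> not entailed
print(neu)  # same relation, unattested entities
```

An unbiased model should flip its prediction to non-entailment on the adversarial pair and keep its accuracy on the neutralized pair, which is the contrast the paper's evaluations measure.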
Anthology ID:
2025.findings-acl.705
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13714–13730
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.705/
DOI:
10.18653/v1/2025.findings-acl.705
Cite (ACL):
Liang Cheng, Tianyi Li, Zhaowei Wang, Tianyang Liu, and Mark Steedman. 2025. Neutralizing Bias in LLM Reasoning using Entailment Graphs. In Findings of the Association for Computational Linguistics: ACL 2025, pages 13714–13730, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Neutralizing Bias in LLM Reasoning using Entailment Graphs (Cheng et al., Findings 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.705.pdf