Verifying the Steps of Deductive Reasoning Chains

Zacchary Sadeddine, Fabian M. Suchanek


Abstract
As Large Language Models penetrate everyday life more and more, it becomes essential to measure the correctness of their output. In this paper, we propose a novel task: the automatic verification of individual reasoning steps in a logical deductive Chain-of-Thought. This task addresses two well-known problems of LLMs, hallucination and incorrect reasoning. We propose a new dataset of logical reasoning chains, in which the individual deduction steps have been manually annotated for soundness, and benchmark several methods on it. We find that LLMs can detect unsound reasoning steps fairly well, but argue that verification has to be performed by transparent methods instead. We test symbolic methods, but find that they under-perform. We develop a neuro-symbolic baseline called VANESSA that comes closer to the performance of LLMs.
Anthology ID:
2025.findings-acl.25
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
456–475
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.25/
Cite (ACL):
Zacchary Sadeddine and Fabian M. Suchanek. 2025. Verifying the Steps of Deductive Reasoning Chains. In Findings of the Association for Computational Linguistics: ACL 2025, pages 456–475, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Verifying the Steps of Deductive Reasoning Chains (Sadeddine & Suchanek, Findings 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.findings-acl.25.pdf