Exploring Backward Reasoning in Large Language Models

Leonardo Ranaldi, Giulia Pucci


Abstract
Multi-step reasoning through in-context learning strategies has been extensively explored, highlighting the ability of Large Language Models (LLMs) to generate answers derived from step-by-step reasoning. These studies focus on LLMs' forward reasoning abilities, epitomised by a series of general premises leading to a final solution. In this paper, taking the reverse perspective, we study the backward reasoning abilities of LLMs, namely the inference that leads from a conclusion back to the causal hypothesis. Beyond formalising backward problems, we analyse whether LLMs are able to reason from the conclusion and reconstruct the original question that led to the final answer. Operating with question-answering tasks involving symbolic reasoning, understanding, and commonsense abilities, we observe that the evaluated models reveal robust comprehension capabilities when handling different kinds of input; however, they are not always able to reason in the backward direction. Finally, to address this limitation, we demonstrate that instructing LLMs to generate the answer by reconsidering the structure of the problem improves backward reasoning.
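To make the forward/backward distinction concrete, below is a minimal sketch of how the two prompting directions described in the abstract could be set up. The template wording and function names are hypothetical illustrations, not the prompts used in the paper: forward reasoning goes from a question to an answer, while backward reasoning starts from the answer and asks the model to reconstruct the question.

```python
# Sketch contrasting forward and backward reasoning prompts.
# The templates below are illustrative assumptions, not the paper's prompts.

FORWARD_TEMPLATE = (
    "Question: {question}\n"
    "Reason step by step and give the final answer."
)

BACKWARD_TEMPLATE = (
    "Answer: {answer}\n"
    "Reconstruct the original question that this answer resolves, "
    "reasoning step by step over the structure of the problem."
)


def build_forward_prompt(question: str) -> str:
    """Standard (forward) multi-step reasoning: premises -> final answer."""
    return FORWARD_TEMPLATE.format(question=question)


def build_backward_prompt(answer: str) -> str:
    """Backward reasoning: start from the conclusion and recover the question."""
    return BACKWARD_TEMPLATE.format(answer=answer)


if __name__ == "__main__":
    print(build_forward_prompt(
        "Alice has 3 apples and buys 2 more. How many apples does she have?"
    ))
    print()
    print(build_backward_prompt("Alice ends up with 5 apples."))
```

Either prompt string would then be passed to an LLM; only the direction of inference changes, which is the contrast the paper investigates.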
Anthology ID:
2025.findings-naacl.366
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
6571–6586
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.366/
Cite (ACL):
Leonardo Ranaldi and Giulia Pucci. 2025. Exploring Backward Reasoning in Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6571–6586, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Exploring Backward Reasoning in Large Language Models (Ranaldi & Pucci, Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.366.pdf