Reason from Future: Reverse Thought Chain Enhances LLM Reasoning

Yinlong Xu, Yanzhao Zheng, Shuoshuo Sun, Shuaihan Huang, Baohua Dong, Zhu Hangcheng, Ruohui Huang, Gang Yu, Hongxia Xu, Jian Wu


Abstract
It has been demonstrated that carefully designed reasoning paradigms, such as Chain-of-Thought (CoT) and Tree-of-Thought (ToT), can enhance the reasoning capabilities of small language models through detailed thinking and extensive thought searching; however, the unbounded branching factor of the search space incurs prohibitive reasoning cost. Moreover, these methods fall into the trap of locally optimal reasoning, meaning the model lacks a global perspective while solving problems. We propose a novel reasoning paradigm called Reason from Future (RFF), which generates reasoning paths via bidirectional reasoning that combines top-down planning with bottom-up reasoning accumulation. The essence of RFF lies in its reverse reasoning mechanism, which prioritizes core logical relationships and imposes goal-oriented constraints on intermediate steps, thereby reducing the search space and mitigating the error accumulation inherent in sequential forward reasoning. Empirical evaluations across diverse experiments demonstrate that RFF outperforms conventional paradigms, achieving higher accuracy with a smaller search space on complex tasks.
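To make the bidirectional mechanism concrete, below is a minimal Python sketch of an RFF-style control loop as the abstract describes it: reverse planning from the goal constrains each forward step. The names `plan_backward`, `step_forward`, and `reached` are hypothetical stand-ins for LLM calls, not the paper's actual interface; see the PDF for the authors' algorithm.

def rff_solve(question, goal, plan_backward, step_forward, reached, max_steps=8):
    """Sketch of Reason-from-Future-style bidirectional reasoning.

    Hypothetical callables (assumptions, not from the paper):
      plan_backward(question, goal)           -> subgoal that must hold just before the goal
      step_forward(question, history, subgoal) -> one new fact derived toward that subgoal
      reached(history, goal)                   -> True once accumulated facts satisfy the goal
    """
    history = []  # bottom-up accumulation of derived facts
    for _ in range(max_steps):
        if reached(history, goal):
            return history
        # Top-down: reason backward from the goal to a goal-oriented
        # constraint on the next step, shrinking the search space.
        subgoal = plan_backward(question, goal)
        # Bottom-up: take one forward step aimed at that subgoal.
        history.append(step_forward(question, history, subgoal))
    return history if reached(history, goal) else None

The design point this sketch illustrates is that, unlike purely forward CoT, each forward step is vetted against a backward-derived subgoal, so errors are caught against the goal rather than accumulating silently.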
Anthology ID:
2025.findings-acl.1290
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
25153–25166
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.1290/
DOI:
10.18653/v1/2025.findings-acl.1290
Cite (ACL):
Yinlong Xu, Yanzhao Zheng, Shuoshuo Sun, Shuaihan Huang, Baohua Dong, Zhu Hangcheng, Ruohui Huang, Gang Yu, Hongxia Xu, and Jian Wu. 2025. Reason from Future: Reverse Thought Chain Enhances LLM Reasoning. In Findings of the Association for Computational Linguistics: ACL 2025, pages 25153–25166, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Reason from Future: Reverse Thought Chain Enhances LLM Reasoning (Xu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.1290.pdf