Stepwise Informativeness Search for Improving LLM Reasoning

Siyuan Wang, Enda Zhao, Xiang Ren


Abstract
Advances in Large Language Models (LLMs) have improved multi-step reasoning through the generation of free-text rationales, but these models tend to lose focus over the middle of long contexts. This raises concerns that, as reasoning progresses, LLMs may overlook information in earlier steps when decoding subsequent ones, producing unreliable and redundant rationales. To address this, we propose guiding LLMs to generate more accurate and concise rationales by (1) proactively referencing information from underutilized prior steps, and (2) minimizing redundant information between new and existing steps. We introduce stepwise informativeness search, an inference-time tree search framework with two selection heuristics: grounding-guided selection, which prioritizes steps that attend more strongly to underutilized prior steps, and novelty-guided selection, which encourages steps that draw novel conclusions. We further employ a self-grounding strategy that prompts LLMs to explicitly cite relevant prior steps as premises before each deduction, mitigating distraction from irrelevant content. Experiments on five reasoning datasets across five LLMs show that our approach improves reasoning effectively and efficiently, reducing both errors and redundancy.
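For intuition, below is a minimal, runnable sketch of how the two selection heuristics described in the abstract might drive a beam-style tree search. It is an illustration under stated assumptions, not the paper's implementation: the actual grounding heuristic uses the LLM's attention weights over prior steps, which this sketch replaces with a word-overlap proxy, and `propose_steps` / `is_final` are hypothetical wrappers around an LLM step generator.

```python
# Minimal sketch of a stepwise informativeness search (illustrative only).
# Assumptions: word overlap stands in for attention-based grounding, and
# propose_steps / is_final are user-supplied LLM wrappers.
from dataclasses import dataclass, field


# Illustrative self-grounding instruction (the paper's exact prompt differs):
SELF_GROUNDING_PROMPT = (
    "At each step, first cite the prior steps you use as premises "
    "(e.g., 'From Step 2 and Step 4:'), then state the new deduction."
)


@dataclass
class Node:
    steps: list = field(default_factory=list)  # partial rationale so far


def grounding_score(steps, candidate):
    """Fraction of underutilized prior steps (never referenced by any later
    step, judged by word overlap) that the candidate step references."""
    used = set()
    for i, s in enumerate(steps):
        later = set(" ".join(steps[i + 1:]).lower().split())
        if set(s.lower().split()) & later:
            used.add(i)
    under = [i for i in range(len(steps)) if i not in used]
    if not under:
        return 0.0
    cand = set(candidate.lower().split())
    hits = sum(1 for i in under if set(steps[i].lower().split()) & cand)
    return hits / len(under)


def novelty_score(steps, candidate):
    """1 minus the max word overlap with any existing step (higher = newer)."""
    cand = set(candidate.lower().split())
    if not steps or not cand:
        return 1.0
    return 1.0 - max(len(cand & set(s.lower().split())) / len(cand)
                     for s in steps)


def search(question, propose_steps, is_final, beam=3, max_depth=8):
    """Beam-style tree search: expand each partial rationale with candidate
    next steps and keep those scoring highest on grounding + novelty."""
    frontier = [Node()]
    for _ in range(max_depth):
        candidates = []
        for node in frontier:
            for step in propose_steps(question, node.steps):
                score = (grounding_score(node.steps, step)
                         + novelty_score(node.steps, step))
                candidates.append((score, Node(node.steps + [step])))
        if not candidates:
            break
        candidates.sort(key=lambda c: c[0], reverse=True)
        frontier = [n for _, n in candidates[:beam]]
        finished = [n for n in frontier if is_final(n.steps)]
        if finished:
            return finished[0].steps
    return frontier[0].steps if frontier else []


# Toy usage with a stub "LLM" that always proposes the same candidates:
def propose_steps(question, steps):
    pool = ["All men are mortal.", "Socrates is a man.",
            "Therefore Socrates is mortal."]
    return pool if len(steps) < 3 else []


print(search("Is Socrates mortal?", propose_steps,
             is_final=lambda s: len(s) >= 3))
```

Summing the two scores treats grounding and novelty as equally weighted; the paper's actual score combination, and any normalization it applies, are not reproduced here.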
Anthology ID: 2025.emnlp-main.1285
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 25291–25309
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1285/
Cite (ACL): Siyuan Wang, Enda Zhao, and Xiang Ren. 2025. Stepwise Informativeness Search for Improving LLM Reasoning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 25291–25309, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Stepwise Informativeness Search for Improving LLM Reasoning (Wang et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1285.pdf
Checklist: 2025.emnlp-main.1285.checklist.pdf