Prejudge-Before-Think: Enhancing Large Language Models at Test-Time by Process Prejudge Reasoning
Jianing Wang, Jin Jiang, Yang Liu, Mengdi Zhang, Xunliang Cai
Abstract
In this paper, we introduce a new process prejudge strategy for LLM reasoning. Bootstrapping with process prejudge allows the LLM to adaptively anticipate errors before advancing to subsequent reasoning steps, much as people pause to consider what mistakes may occur and how to avoid them, rather than relying solely on trial and error. Specifically, we define a prejudge node in the rationale: a reasoning step that is followed by at least one step from which no path leads to the correct answer. To synthesize the prejudge reasoning process, we present an automated reasoning framework with a dynamic tree-searching strategy; the framework requires only a single LLM to perform answer judging, response critiquing, prejudge generation, and thought completion. We further develop a two-phase training mechanism, combining supervised fine-tuning (SFT) and reinforcement learning (RL), to enhance the reasoning capabilities of LLMs. Experimental results on competition-level complex reasoning demonstrate that our method teaches the model to prejudge before thinking and significantly enhances the reasoning ability of LLMs.
- Anthology ID:
- 2025.findings-emnlp.250
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2025
- Month:
- November
- Year:
- 2025
- Address:
- Suzhou, China
- Editors:
- Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 4656–4673
- URL:
- https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.250/
- DOI:
- 10.18653/v1/2025.findings-emnlp.250
- Cite (ACL):
- Jianing Wang, Jin Jiang, Yang Liu, Mengdi Zhang, and Xunliang Cai. 2025. Prejudge-Before-Think: Enhancing Large Language Models at Test-Time by Process Prejudge Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 4656–4673, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- Prejudge-Before-Think: Enhancing Large Language Models at Test-Time by Process Prejudge Reasoning (Wang et al., Findings 2025)
- PDF:
- https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.250.pdf
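The abstract's definition of a prejudge node (a reasoning step followed by at least one step from which no path leads to the correct answer) can be sketched over a simple reasoning tree. This is a minimal illustration based only on the abstract; the `Step`, `reaches_correct`, and `is_prejudge_node` names are hypothetical and not taken from the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """A node in a reasoning tree; hypothetical structure for illustration."""
    text: str
    children: list = field(default_factory=list)
    correct: bool = False  # for terminal steps: does this step reach the right answer?

def reaches_correct(node: Step) -> bool:
    """True if some path from this step eventually reaches the correct answer."""
    if not node.children:
        return node.correct
    return any(reaches_correct(c) for c in node.children)

def is_prejudge_node(node: Step) -> bool:
    """Per the abstract's definition: at least one following step
    has no path toward the correct answer."""
    return any(not reaches_correct(c) for c in node.children)

# A toy rationale: one continuation dead-ends, the other reaches the answer,
# so the root qualifies as a prejudge node.
root = Step("set up the equation", children=[
    Step("drop the negative root", children=[Step("wrong answer")]),
    Step("keep both roots", children=[Step("correct answer", correct=True)]),
])
```

At such a node, the synthesized rationale would insert a prejudge step warning about the failing continuation before the model commits to it.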