StepSearch: Igniting LLMs Search Ability via Step-Wise Proximal Policy Optimization

Xuhui Zheng, Kang An, Ziliang Wang, Yuhang Wang, Yichao Wu


Abstract
Efficient multi-hop reasoning requires agents based on Large Language Models (LLMs) to acquire high-value external knowledge iteratively. Previous work has explored reinforcement learning (RL) to train LLMs to perform search-based document retrieval, achieving notable improvements in QA performance, but these methods underperform on complex, multi-hop QA because their rewards are sparse, coming only from a global signal. To address this gap, we introduce StepSearch, a framework for search LLMs trained with a step-wise proximal policy optimization method. It provides richer and more detailed intermediate search rewards and token-level process supervision, based on information gain and redundancy penalties, to better guide each search step. Using a dedicated data pipeline, we construct a fine-grained question-answering dataset containing sub-question-level search trajectories derived from open-source datasets. On standard multi-hop QA benchmarks, StepSearch significantly outperforms global-reward baselines, achieving 11.2% and 4.2% absolute improvements for 3B and 7B models over various RL-based search baselines using only 19k training samples, demonstrating the effectiveness of fine-grained, step-wise supervision in optimizing deep-search LLMs. The project is open source at https://github.com/Zillwang/StepSearch
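The step-wise reward the abstract alludes to can be pictured with a small sketch. The Python snippet below is a minimal illustration, not the paper's actual implementation: it assumes each search step returns a set of document ids and scores the step by information gain over the remaining golden evidence minus a redundancy penalty for re-retrieved documents. All names (step_reward, golden, alpha, beta) are hypothetical; see the linked PDF for the exact formulation.

```python
# Minimal sketch of a step-wise search reward in the spirit of the abstract:
# each retrieval step is scored by the new useful evidence it surfaces
# (information gain) minus a penalty for re-retrieving documents already
# seen (redundancy). All names here are hypothetical, not from the paper.

def step_reward(
    retrieved: set[str],   # doc ids returned by the search at this step
    golden: set[str],      # doc ids supporting the remaining sub-questions
    seen: set[str],        # doc ids retrieved at earlier steps
    alpha: float = 1.0,    # weight of the information-gain term
    beta: float = 0.5,     # weight of the redundancy penalty
) -> float:
    """Return a scalar reward for one search step."""
    new_hits = retrieved & (golden - seen)            # fresh, useful evidence
    redundant = retrieved & seen                      # repeated retrievals
    info_gain = len(new_hits) / max(len(golden), 1)
    redundancy = len(redundant) / max(len(retrieved), 1)
    return alpha * info_gain - beta * redundancy


# Example: a step that re-fetches one old doc but also finds one golden doc.
print(step_reward({"d3", "d1"}, golden={"d3", "d7"}, seen={"d1", "d2"}))
# -> 1.0 * (1/2) - 0.5 * (1/2) = 0.25
```

Per-step signals of this kind supplement the global answer reward during step-wise PPO training, which is how the framework avoids relying on the sparse global signal alone.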
Anthology ID:
2025.emnlp-main.1106
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
21816–21841
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1106/
Cite (ACL):
Xuhui Zheng, Kang An, Ziliang Wang, Yuhang Wang, and Yichao Wu. 2025. StepSearch: Igniting LLMs Search Ability via Step-Wise Proximal Policy Optimization. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 21816–21841, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
StepSearch: Igniting LLMs Search Ability via Step-Wise Proximal Policy Optimization (Zheng et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1106.pdf
Checklist:
2025.emnlp-main.1106.checklist.pdf