Step-level Verifier-guided Hybrid Test-Time Scaling for Large Language Models
Kaiyan Chang | Yonghao Shi | Chenglong Wang | Hang Zhou | Chi Hu | Xiaoqian Liu | Yingfeng Luo | Yuan Ge | Tong Xiao | JingBo Zhu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Test-Time Scaling (TTS) is a promising approach to progressively elicit a model's intelligence during inference. Recently, training-based TTS methods, such as continued reinforcement learning (RL), have surged in popularity, while training-free TTS methods are gradually fading from prominence. However, the additional computational overhead of training amplifies the burden of test-time scaling. In this paper, we focus on training-free TTS methods for reasoning. We first design Conditional Step-level Self-refinement, a fine-grained sequential scaling method guided by process verification. Building on its effectiveness, we further combine it with other classical parallel scaling methods at the step level to introduce a novel inference paradigm called Hybrid Test-Time Scaling. Extensive experiments on five instruction-tuned LLMs across different scales (3B-14B) and families demonstrate that a hybrid strategy incorporating various training-free TTS methods at a fine granularity has considerable potential for expanding the reasoning performance boundaries of LLMs.
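The abstract describes the method only at a high level. As a rough illustration of the paradigm, the sketch below combines step-level best-of-N sampling (parallel scaling) with verifier-conditioned self-refinement of individual steps (sequential scaling). All function names (generate_step, verify_step, refine_step), the threshold, and the control flow are assumptions for illustration, not the paper's actual implementation; a real system would back them with an LLM and a process reward model.

```python
import random

# Illustrative stand-ins for a real LLM and a process reward model (PRM).
# These names are hypothetical, not the paper's API; replace them with
# actual model and verifier calls.
def generate_step(prefix: str) -> str:
    """Sample one candidate reasoning step conditioned on the prefix."""
    return f"step_{random.randint(0, 999)}"

def refine_step(prefix: str, step: str) -> str:
    """Ask the model to rewrite a step that the verifier flagged."""
    return step + "_refined"

def verify_step(prefix: str, step: str) -> float:
    """Process verifier: score a single reasoning step in [0, 1]."""
    return random.random()

def hybrid_tts(question: str, n_samples: int = 4,
               threshold: float = 0.5, max_steps: int = 8) -> str:
    """Step-level hybrid test-time scaling: best-of-N sampling per step
    (parallel scaling) plus conditional self-refinement of low-scoring
    steps (sequential scaling), both guided by the process verifier."""
    prefix = question
    for _ in range(max_steps):
        # Parallel scaling: sample N candidate steps and keep the one
        # the process verifier scores highest.
        candidates = [generate_step(prefix) for _ in range(n_samples)]
        scores = {s: verify_step(prefix, s) for s in candidates}
        best = max(candidates, key=scores.get)
        # Conditional step-level self-refinement: refine only when the
        # verifier deems the chosen step unreliable.
        if scores[best] < threshold:
            best = refine_step(prefix, best)
        prefix += "\n" + best
    return prefix

print(hybrid_tts("Q: 2 + 2 * 3 = ?"))
```

The key design point this sketch tries to capture is the granularity: verification, selection, and refinement all happen per step rather than per full solution, so refinement effort is spent only where the verifier signals a weak step.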