Kaiyan Chang
2025
Step-level Verifier-guided Hybrid Test-Time Scaling for Large Language Models
Kaiyan Chang | Yonghao Shi | Chenglong Wang | Hang Zhou | Chi Hu | Xiaoqian Liu | Yingfeng Luo | Yuan Ge | Tong Xiao | JingBo Zhu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Test-Time Scaling (TTS) is a promising approach to progressively elicit the model’s intelligence during inference. Recently, training-based TTS methods, such as continued reinforcement learning (RL), have further surged in popularity, while training-free TTS methods are gradually fading from prominence. However, the additional computation overhead of training amplifies the burden on test-time scaling. In this paper, we focus on training-free TTS methods for reasoning. We first design Conditional Step-level Self-refinement, a fine-grained sequential scaling method guided by process verification. On top of its effectiveness, we further combine it with other classical parallel scaling methods at the step level, to introduce a novel inference paradigm called Hybrid Test-Time Scaling. Extensive experiments on five instruction-tuned LLMs across different scales (3B-14B) and families demonstrate that a hybrid strategy incorporating various training-free TTS methods at a fine granularity has considerable potential for expanding the reasoning performance boundaries of LLMs.
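The sketch below is a minimal, hypothetical illustration of the kind of step-level verifier-guided hybrid scaling loop the abstract describes, not the authors' implementation. The functions `generate_step`, `refine_step`, and `verifier_score`, and the thresholds, are assumptions standing in for an LLM sampling call, a self-refinement prompt, and a process verifier.

```python
# Hypothetical sketch of step-level verifier-guided hybrid test-time scaling.
# Combines parallel scaling (best-of-N candidate steps) with conditional
# step-level self-refinement triggered by a process verifier score.
from typing import Callable, List


def hybrid_tts(
    question: str,
    generate_step: Callable[[str, List[str]], str],          # samples one candidate next step
    refine_step: Callable[[str, List[str], str], str],        # rewrites a low-scoring step
    verifier_score: Callable[[str, List[str], str], float],   # process verifier score in [0, 1]
    n_candidates: int = 4,          # parallel scaling: candidates per step
    refine_threshold: float = 0.7,  # sequential scaling: refine only when below this score
    max_steps: int = 16,
) -> List[str]:
    steps: List[str] = []
    for _ in range(max_steps):
        # Parallel scaling at the step level: sample several candidate steps
        # and keep the one the process verifier scores highest.
        candidates = [generate_step(question, steps) for _ in range(n_candidates)]
        scored = [(verifier_score(question, steps, c), c) for c in candidates]
        best_score, best_step = max(scored, key=lambda x: x[0])

        # Conditional step-level self-refinement: refine only when the
        # verifier flags the best candidate as weak.
        if best_score < refine_threshold:
            best_step = refine_step(question, steps, best_step)

        steps.append(best_step)
        if "final answer" in best_step.lower():  # crude stopping heuristic
            break
    return steps
```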
2024
Hybrid Alignment Training for Large Language Models
Chenglong Wang | Hang Zhou | Kaiyan Chang | Bei Li | Yongyu Mu | Tong Xiao | Tongran Liu | JingBo Zhu
Findings of the Association for Computational Linguistics: ACL 2024
Alignment training is crucial for enabling large language models (LLMs) to cater to human intentions and preferences. It is typically performed in two stages with different objectives: instruction-following alignment and human-preference alignment. However, aligning LLMs with these objectives in sequence suffers from an inherent problem: the objectives may conflict, and the LLMs cannot be guaranteed to align well with both the instructions and human preferences simultaneously. To address this, in this work, we propose a Hybrid Alignment Training (Hbat) approach, based on alternating alignment and modified elastic weight consolidation methods. The basic idea is to alternate between different objectives during alignment training, so that better collaboration can be achieved between the two alignment tasks. We experiment with Hbat on summarization and dialogue tasks. Experimental results show that the proposed Hbat can significantly outperform all baselines. Notably, Hbat yields consistent performance gains over the traditional two-stage alignment training when using both proximal policy optimization and direct preference optimization.
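As a rough illustration of the alternating idea in the abstract, the sketch below switches between an instruction-following loss and a preference loss while adding an elastic-weight-consolidation (EWC) penalty; it is not the paper's code. The helpers `sft_loss`, `preference_loss`, `anchor_params`, and `fisher` are assumed to be supplied by the caller.

```python
# Illustrative sketch of alternating alignment training with an EWC-style
# regularizer (hypothetical, not the Hbat implementation).
import torch


def ewc_penalty(model, anchor_params, fisher, lam=0.1):
    # Elastic weight consolidation: penalize drift from anchor parameters,
    # weighted by an estimated per-parameter Fisher information.
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in anchor_params:
            penalty = penalty + (fisher[name] * (p - anchor_params[name]) ** 2).sum()
    return lam * penalty


def alternating_alignment_step(model, optimizer, sft_batch, pref_batch,
                               anchor_params, fisher, step,
                               sft_loss, preference_loss):
    # Alternate between the two alignment objectives instead of training
    # them strictly in sequence, so neither objective overwrites the other.
    if step % 2 == 0:
        loss = sft_loss(model, sft_batch)          # instruction-following alignment
    else:
        loss = preference_loss(model, pref_batch)  # human-preference alignment (e.g. DPO)
    loss = loss + ewc_penalty(model, anchor_params, fisher)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```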