ConCISE: Confidence-guided Compression in Step-by-step Efficient Reasoning
Ziqing Qiao | Yongheng Deng | Jiali Zeng | Dong Wang | Lai Wei | Guanbo Wang | Fandong Meng | Jie Zhou | Ju Ren | Yaoxue Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large Reasoning Models (LRMs) perform strongly on complex reasoning tasks via Chain-of-Thought (CoT) prompting, but often produce verbose outputs that increase computational overhead. Existing fine-tuning-based compression methods either perform post-hoc pruning, which risks disrupting reasoning coherence, or rely on sampling-based selection, which fails to remove redundant content thoroughly. To address these limitations, this work adopts a confidence-guided perspective and identifies two key patterns of redundant reflection in LRMs: Confidence Deficit, in which the model re-reflects on correct intermediate steps, and Termination Delay, in which reflection continues after a verified, confident answer. Building on this, we introduce ConCISE (Confidence-guided Compression In Step-by-step Efficient Reasoning), a framework that generates concise reasoning chains by integrating Confidence Injection, which boosts reasoning confidence during generation, with Early Stopping, which terminates reasoning once confidence is sufficient. Extensive experiments demonstrate that, compared to baseline methods, fine-tuning LRMs on ConCISE-generated data yields a better balance between compression and task performance, reducing response length by up to ~50% under SimPO while maintaining high task accuracy.
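To make the two mechanisms concrete, here is a minimal Python sketch of a confidence-guided decoding loop in the spirit the abstract describes. It is not the paper's implementation: the scoring function, the thresholds, the injected phrase, and all names (`next_step`, `concise_style_decode`, `CONFIDENCE_PHRASE`) are hypothetical placeholders, assuming a per-step confidence score (e.g., derived from token probabilities) is available.

```python
from dataclasses import dataclass
from typing import Callable, List

# Placeholder phrase appended after confident steps; the actual injection
# text and mechanism in ConCISE may differ.
CONFIDENCE_PHRASE = "I am confident in this step."

@dataclass
class Step:
    text: str
    confidence: float  # assumed score in [0, 1], e.g. from mean token log-prob
    is_answer: bool    # whether this step states a final answer

def concise_style_decode(
    next_step: Callable[[List[str]], Step],  # wraps the LRM: context -> next step
    inject_threshold: float = 0.8,           # placeholder threshold
    stop_threshold: float = 0.9,             # placeholder threshold
    max_steps: int = 64,
) -> List[str]:
    """Sketch of confidence-guided generation of a concise reasoning chain."""
    chain: List[str] = []
    for _ in range(max_steps):
        step = next_step(chain)
        chain.append(step.text)
        # Early Stopping: a sufficiently confident final answer ends the
        # chain, suppressing post-answer reflection (Termination Delay).
        if step.is_answer and step.confidence >= stop_threshold:
            break
        # Confidence Injection: reinforce confident intermediate steps so
        # the model does not redundantly re-reflect on them (Confidence Deficit).
        if step.confidence >= inject_threshold:
            chain.append(CONFIDENCE_PHRASE)
    return chain
```

Per the abstract, chains produced this way serve as training data for fine-tuning (e.g., with SimPO); the sketch only illustrates the generation-time control flow.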