Wait, We Don’t Need to “Wait”! Removing Thinking Tokens Improves Reasoning Efficiency
Chenlong Wang | Yuanning Feng | Dongping Chen | Zhaoyang Chu | Ranjay Krishna | Tianyi Zhou
Findings of the Association for Computational Linguistics: EMNLP 2025
Recent advances in large reasoning models have enabled complex, step-by-step reasoning, but often introduce significant overthinking, resulting in verbose and redundant outputs that hinder efficiency. In this study, we examine whether explicit self-reflection, signaled by tokens such as “Wait” and “Hmm”, is necessary for advanced reasoning. We propose NoWait, a simple yet effective approach that disables explicit self-reflection by suppressing these tokens during inference. Extensive experiments on ten benchmarks across textual, visual, and video reasoning tasks show that NoWait reduces chain-of-thought trajectory length by up to 27%–51% across five R1-style model series, without compromising model utility. NoWait thus offers a plug-and-play solution for efficient, utility-preserving multimodal reasoning.
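The abstract describes NoWait as suppressing self-reflection keywords during inference. Below is a minimal sketch of how such token suppression could be implemented with a Hugging Face `transformers` LogitsProcessor; the model name, keyword list, and `SuppressReflectionTokens` class are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): suppress self-reflection
# keywords such as "Wait"/"Hmm" at decoding time by masking their logits.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    LogitsProcessor,
    LogitsProcessorList,
)

class SuppressReflectionTokens(LogitsProcessor):
    """Set the logits of the given token ids to -inf so they are never sampled."""

    def __init__(self, token_ids):
        self.token_ids = token_ids

    def __call__(self, input_ids, scores):
        scores[:, self.token_ids] = float("-inf")
        return scores

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # any R1-style model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Gather ids for reflection keywords; tokenizers often map " Wait" and "Wait"
# to different single tokens, so try common surface variants.
keywords = ["Wait", " Wait", "wait", " wait", "Hmm", " Hmm"]
suppressed_ids = []
for kw in keywords:
    ids = tokenizer.encode(kw, add_special_tokens=False)
    if len(ids) == 1:  # only suppress keywords that map to a single token
        suppressed_ids.append(ids[0])

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=512,
    logits_processor=LogitsProcessorList(
        [SuppressReflectionTokens(suppressed_ids)]
    ),
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A similar effect can likely be achieved with the built-in `SuppressTokensLogitsProcessor` (or the `suppress_tokens` generation option) in recent `transformers` releases; the custom processor above simply makes the masking step explicit.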