Thinking Long, but Short: Stable Sequential Test-Time Scaling for Large Reasoning Models

Michael R. Metel, Yufei Cui, Boxing Chen, Prasanna Parthasarathi


Abstract
Sequential test-time scaling is a promising training-free method for improving the accuracy of large reasoning models, but current implementations have significant limitations. Inducing a model to think for longer can increase its accuracy, yet extending the reasoning length further has been shown to degrade accuracy and destabilize the model. This work presents a novel sequential test-time scaling method, Min-Seek, which significantly improves model accuracy over a wide range of induced thoughts, stabilizing the accuracy of sequential scaling and removing the need to fine-tune the reasoning length. Beyond improving accuracy across a variety of reasoning tasks, our method is inherently efficient: only the KV pairs of one additional induced thought are kept in the KV cache during reasoning. With a custom KV cache that stores keys without position embeddings, dynamically encoding them contiguously before each newly generated thought, our method can continue to reason well beyond a model's maximum context length, and under mild conditions has linear computational complexity.
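The abstract's key efficiency idea is a KV cache that stores keys without position embeddings and applies them contiguously at attention time, so evicted thoughts never leave gaps in the position indices. The paper's implementation details are not given here; the following is a minimal illustrative sketch under the assumption that the model uses rotary position embeddings (RoPE). The class name `PositionFreeKVCache` and its interface are hypothetical, not the authors' API.

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply rotary position embeddings to x (seq, dim), dim even,
    at the given integer positions (seq,)."""
    d = x.shape[-1]
    inv_freq = 1.0 / base ** (np.arange(0, d, 2) / d)   # (d/2,)
    ang = np.outer(positions, inv_freq)                 # (seq, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

class PositionFreeKVCache:
    """Hypothetical KV cache that stores keys WITHOUT position
    embeddings. Before each attention step, the retained keys are
    re-encoded with fresh contiguous positions 0..n-1, so the
    effective position range never exceeds the number of retained
    tokens, no matter how many thoughts were generated and evicted."""
    def __init__(self, dim):
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def append(self, k, v):
        # Store raw (un-rotated) keys alongside their values.
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])

    def evict(self, start, stop):
        # Drop a span of cached tokens, e.g. an old induced thought.
        keep = np.r_[0:start, stop:len(self.keys)]
        self.keys, self.values = self.keys[keep], self.values[keep]

    def encoded_keys(self):
        # Rotate the surviving keys with contiguous positions.
        return rope(self.keys, np.arange(len(self.keys)))
```

Because rotation is deferred to read time, eviction is a plain row deletion, and the positions seen by attention always stay within the model's trained range.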
Anthology ID:
2026.findings-eacl.153
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2942–2951
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.153/
Cite (ACL):
Michael R. Metel, Yufei Cui, Boxing Chen, and Prasanna Parthasarathi. 2026. Thinking Long, but Short: Stable Sequential Test-Time Scaling for Large Reasoning Models. In Findings of the Association for Computational Linguistics: EACL 2026, pages 2942–2951, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Thinking Long, but Short: Stable Sequential Test-Time Scaling for Large Reasoning Models (Metel et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.153.pdf
Checklist:
2026.findings-eacl.153.checklist.pdf