War of Thoughts: Competition Stimulates Stronger Reasoning in Large Language Models

Yibin Chen, Jinyi Liu, Yan Zheng, Yifu Yuan, Jianye Hao


Abstract
Recent advances in Large Language Models (LLMs) have reshaped the landscape of reasoning tasks, particularly through test-time scaling (TTS) to enhance LLM reasoning. Prior research has used structures such as trees or graphs to guide LLMs in searching for optimal solutions. However, these methods are time-consuming and require a strong reward model (RM) to support effective solution-space exploration. Tournament-style approaches eliminate the reliance on RMs through comparative evaluation, but they suffer from transitivity dilemmas, leading to unstable orderings. To address these issues, we propose War of Thoughts (**WoT**), a novel post-hoc method that enhances reasoning without finetuning. WoT comprises two distinct stages: (1) *Exploration*, in which diverse and meaningful candidate solutions are generated through contrastive demonstrations and multi-granularity reasoning specifications; and (2) *Competition*, in which these candidate solutions undergo multiple rounds of matchups within a competitive arena. Throughout this iterative process, the solutions are optimized and improved, and the optimal solution is determined by Elo rating. Extensive experiments across various LLMs demonstrate the superiority of WoT, which surpasses baselines by **10–30%**. WoT effectively stimulates stronger reasoning abilities, achieving impressive TTS performance across both generation budget and model size: it scales more efficiently than baselines under the same budget, and, notably, a 7B model equipped with WoT even outperforms a 72B model.
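
To make the *Competition* stage concrete, the sketch below illustrates Elo-based tournament selection over candidate solutions. This is a minimal illustration under stated assumptions, not the paper's exact protocol: the `judge` callback (standing in for an LLM-based pairwise comparison), the K-factor, and the repeated round-robin schedule are all illustrative choices.

```python
import itertools
import random

K = 32               # Elo K-factor (assumed; the paper may use a different value)
INIT_RATING = 1000.0  # starting rating for every candidate

def expected_score(r_a: float, r_b: float) -> float:
    """Expected win probability of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, score_a: float) -> tuple[float, float]:
    """Update both ratings after one matchup; score_a is 1.0, 0.5, or 0.0."""
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + K * (score_a - e_a)
    r_b_new = r_b + K * ((1.0 - score_a) - (1.0 - e_a))
    return r_a_new, r_b_new

def run_tournament(candidates, judge, rounds: int = 3):
    """Repeated round-robin matchups; returns the candidate with the
    highest final Elo rating along with all ratings.

    `judge(a, b)` is a hypothetical comparator returning 1.0 if a wins,
    0.0 if b wins, and 0.5 for a tie.
    """
    ratings = {i: INIT_RATING for i in range(len(candidates))}
    for _ in range(rounds):
        pairs = list(itertools.combinations(range(len(candidates)), 2))
        random.shuffle(pairs)  # randomize matchup order each round
        for i, j in pairs:
            s = judge(candidates[i], candidates[j])
            ratings[i], ratings[j] = elo_update(ratings[i], ratings[j], s)
    best = max(ratings, key=ratings.get)
    return candidates[best], ratings
```

One design note: aggregating many matchups into a single Elo rating, rather than relying on one elimination bracket, is a simple way to dampen the ordering instability that non-transitive pairwise judgments can cause.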
Anthology ID:
2025.findings-acl.1118
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
21716–21737
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1118/
Cite (ACL):
Yibin Chen, Jinyi Liu, Yan Zheng, Yifu Yuan, and Jianye Hao. 2025. War of Thoughts: Competition Stimulates Stronger Reasoning in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 21716–21737, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
War of Thoughts: Competition Stimulates Stronger Reasoning in Large Language Models (Chen et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1118.pdf