Collaborative Beam Search: Enhancing LLM Reasoning via Collective Consensus

Yangyifan Xu, Shuo Ren, Jiajun Zhang


Abstract
Complex multi-step reasoning remains challenging for large language models (LLMs). While parallel inference-time scaling methods such as step-level beam search offer a promising solution, existing approaches typically depend on either domain-specific external verifiers or self-evaluation, which is brittle and prompt-sensitive. To address these issues, we propose Collaborative Beam Search (CBS), an iterative framework that harnesses the collective intelligence of multiple LLMs in both the generation and verification stages. For generation, CBS leverages multiple LLMs to explore a broader search space, yielding more diverse candidate steps. For verification, CBS employs a perplexity-based collective consensus among these models, eliminating reliance on an external verifier or complex prompts. Between iterations, CBS applies a dynamic quota allocation strategy that reassigns the generation budget based on each model's past performance, striking a balance between candidate diversity and quality. Experimental results on six tasks across arithmetic, logical, and commonsense reasoning show that CBS outperforms single-model scaling and multi-model ensemble baselines by over 4 percentage points in average accuracy, demonstrating its effectiveness and general applicability.
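The perplexity-based collective consensus described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `token_logprobs`-style model interface (here, plain callables returning per-token log-probabilities) and the use of mean perplexity across models as the consensus score are assumptions for the sake of a runnable example; the paper's exact scoring and aggregation may differ.

```python
import math

def perplexity(logprobs):
    """Perplexity of a sequence from its per-token natural-log probabilities."""
    return math.exp(-sum(logprobs) / len(logprobs))

def consensus_score(step, models):
    """Average perplexity of a candidate step across all models.
    Lower scores indicate stronger collective agreement that the step is fluent/plausible."""
    return sum(perplexity(model(step)) for model in models) / len(models)

def select_beams(candidates, models, beam_width):
    """Keep the beam_width candidate steps with the lowest consensus perplexity."""
    return sorted(candidates, key=lambda s: consensus_score(s, models))[:beam_width]

# Toy stand-ins for LLMs: each "model" maps a candidate step to fixed
# per-token log-probabilities (a real model would score actual tokens).
model_a = lambda step: ([-0.1] if "correct" in step else [-2.0]) * len(step.split())
model_b = lambda step: ([-0.2] if "correct" in step else [-1.5]) * len(step.split())

best = select_beams(["a correct step", "a dubious step"], [model_a, model_b], beam_width=1)
```

In a full CBS iteration, each surviving beam would then be extended by every participating model (subject to its current generation quota), and the consensus scoring above would prune the pooled candidates back down to the beam width.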
Anthology ID:
2025.emnlp-main.574
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11409–11421
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.574/
Cite (ACL):
Yangyifan Xu, Shuo Ren, and Jiajun Zhang. 2025. Collaborative Beam Search: Enhancing LLM Reasoning via Collective Consensus. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 11409–11421, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Collaborative Beam Search: Enhancing LLM Reasoning via Collective Consensus (Xu et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.574.pdf
Checklist:
 2025.emnlp-main.574.checklist.pdf