Ranked Voting based Self-Consistency of Large Language Models

Weiqin Wang, Yile Wang, Hui Huang


Abstract
Majority voting is considered an effective method to enhance chain-of-thought reasoning, as it selects the answer with the highest "self-consistency" among different reasoning paths (Wang et al., 2023). However, previous chain-of-thought reasoning methods typically generate only a single answer in each trial, ignoring other potential answers, which are consequently excluded from the subsequent voting process. In this work, we propose to generate ranked answers in each reasoning process and conduct ranked voting among the ranked answers from different responses, making the overall self-consistency more reliable. Specifically, we use three ranked voting methods: instant-runoff voting, Borda count voting, and mean reciprocal rank voting. We validate our methods on six datasets, including three multiple-choice and three open-ended question-answering tasks, using both advanced open-source and closed-source large language models. Extensive experimental results show that our proposed method outperforms the baselines, demonstrating the potential of leveraging ranked-answer information and ranked voting to improve reasoning performance. Code and logs will be released.
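
The abstract names three rank-aggregation rules. As a minimal illustration of one of them, the sketch below shows Borda count voting over ranked answer lists, assuming each sampled reasoning path yields a full ranking of candidate answers. The function name `borda_count` and the example rankings are illustrative assumptions, not taken from the paper or its released code.

```python
# Minimal sketch of Borda-count aggregation over ranked answers,
# assuming each reasoning path yields a ranked list of candidates
# (most preferred first). Names here are hypothetical, not from the paper.
from collections import defaultdict

def borda_count(ranked_answer_lists):
    """Aggregate ranked answer lists from multiple reasoning paths.

    A candidate at position i in a ranking of length n earns n - 1 - i
    points; the candidate with the highest total score wins.
    """
    scores = defaultdict(int)
    for ranking in ranked_answer_lists:
        n = len(ranking)
        for position, answer in enumerate(ranking):
            scores[answer] += n - 1 - position
    return max(scores, key=scores.get)

# Example: three sampled reasoning paths, each ranking its candidates.
paths = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
print(borda_count(paths))  # Borda totals A=5, B=3, C=1 -> prints "A"
```

Instant-runoff voting and mean reciprocal rank voting would differ only in the aggregation step (iterative elimination of the weakest first-choice candidate, or summing 1/(rank) scores, respectively); see the paper for the exact formulations used.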
Anthology ID:
2025.findings-acl.744
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14410–14426
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.744/
Cite (ACL):
Weiqin Wang, Yile Wang, and Hui Huang. 2025. Ranked Voting based Self-Consistency of Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 14410–14426, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Ranked Voting based Self-Consistency of Large Language Models (Wang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.744.pdf