Confidence-guided Refinement Reasoning for Zero-shot Question Answering
Youwon Jang, Woo Suk Choi, Minjoon Jung, Minsu Lee, Byoung-Tak Zhang
Abstract
We propose Confidence-guided Refinement Reasoning (C2R), a novel training-free framework applicable to question-answering (QA) tasks across text, image, and video domains. C2R strategically constructs and refines sub-questions and their answers (sub-QAs), deriving a better confidence score for the target answer. C2R first curates a subset of sub-QAs to explore diverse reasoning paths, then compares the confidence scores of the resulting answer candidates to select the most reliable final answer. Since C2R relies solely on confidence scores derived from the model itself, it can be seamlessly integrated with various existing QA models, demonstrating consistent performance improvements across diverse models and benchmarks. Furthermore, we provide essential yet underexplored insights into how leveraging sub-QAs affects model behavior, specifically analyzing the impact of both the quantity and quality of sub-QAs on achieving robust and reliable reasoning.

- Anthology ID: 2025.emnlp-main.354
- Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2025
- Address: Suzhou, China
- Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 6944–6961
- URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.354/
- Cite (ACL): Youwon Jang, Woo Suk Choi, Minjoon Jung, Minsu Lee, and Byoung-Tak Zhang. 2025. Confidence-guided Refinement Reasoning for Zero-shot Question Answering. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 6944–6961, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal): Confidence-guided Refinement Reasoning for Zero-shot Question Answering (Jang et al., EMNLP 2025)
- PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.354.pdf
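The selection step the abstract describes — comparing the confidence scores of answer candidates produced along different sub-QA reasoning paths and keeping the most reliable one — can be sketched minimally as follows. This is an illustrative assumption, not the paper's implementation: the function name `select_by_confidence` and the toy candidate scores are hypothetical, and the paper's actual confidence derivation from the model is not reproduced here.

```python
# Hypothetical sketch of confidence-guided candidate selection (not the
# authors' code). `candidates` maps each answer candidate, obtained via a
# different sub-QA subset, to a model-derived confidence score.

def select_by_confidence(candidates: dict[str, float]) -> tuple[str, float]:
    """Return the answer candidate with the highest confidence score."""
    if not candidates:
        raise ValueError("no answer candidates to compare")
    best = max(candidates, key=candidates.get)
    return best, candidates[best]

# Toy usage: three candidates from three different reasoning paths.
paths = {"Paris": 0.91, "Lyon": 0.42, "Marseille": 0.17}
answer, score = select_by_confidence(paths)
```

In this sketch the final answer is simply the argmax over candidate confidences; the framework's contribution lies in how the sub-QA subsets and their scores are constructed and refined before this comparison.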