Ambiguity Detection and Uncertainty Calibration for Question Answering with Large Language Models
Zhengyan Shi, Giuseppe Castellucci, Simone Filice, Saar Kuzi, Elad Kravi, Eugene Agichtein, Oleg Rokhlenko, Shervin Malmasi
Abstract
Large Language Models (LLMs) have demonstrated excellent capabilities in Question Answering (QA) tasks, yet their ability to identify and address ambiguous questions remains underdeveloped. Ambiguities in user queries often lead to inaccurate or misleading answers, undermining user trust in these systems. Despite prior attempts using prompt-based methods, performance has largely been equivalent to random guessing, leaving a significant gap in effective ambiguity detection. To address this, we propose a novel framework for detecting ambiguous questions within LLM-based QA systems. We first prompt an LLM to generate multiple answers to a question, and then analyze them to infer the ambiguity. We propose to use a lightweight Random Forest model, trained on a bootstrapped and shuffled dataset of 6-shot examples. Experimental results on the ASQA, PACIFIC, and ABG-COQA datasets demonstrate the effectiveness of our approach, with accuracy up to 70.8%. Furthermore, our framework enhances the confidence calibration of LLM outputs, leading to more trustworthy QA systems capable of handling complex questions.
- Anthology ID:
- 2025.trustnlp-main.4
- Volume:
- Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
- Month:
- May
- Year:
- 2025
- Address:
- Albuquerque, New Mexico
- Editors:
- Trista Cao, Anubrata Das, Tharindu Kumarage, Yixin Wan, Satyapriya Krishna, Ninareh Mehrabi, Jwala Dhamala, Anil Ramakrishna, Aram Galystan, Anoop Kumar, Rahul Gupta, Kai-Wei Chang
- Venues:
- TrustNLP | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 41–55
- URL:
- https://preview.aclanthology.org/moar-dois/2025.trustnlp-main.4/
- DOI:
- 10.18653/v1/2025.trustnlp-main.4
- Cite (ACL):
- Zhengyan Shi, Giuseppe Castellucci, Simone Filice, Saar Kuzi, Elad Kravi, Eugene Agichtein, Oleg Rokhlenko, and Shervin Malmasi. 2025. Ambiguity Detection and Uncertainty Calibration for Question Answering with Large Language Models. In Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025), pages 41–55, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal):
- Ambiguity Detection and Uncertainty Calibration for Question Answering with Large Language Models (Shi et al., TrustNLP 2025)
- PDF:
- https://preview.aclanthology.org/moar-dois/2025.trustnlp-main.4.pdf
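The core idea in the abstract — sample several answers to the same question and analyze their disagreement to infer ambiguity — can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the function names and the specific features (distinct-answer count, pairwise token Jaccard similarity) are illustrative assumptions; the paper feeds such signals into a lightweight Random Forest classifier rather than using them directly.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two answer strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def disagreement_features(answers: list[str]) -> dict:
    """Summarize how much a set of LLM-sampled answers disagree.

    High disagreement (many distinct answers, low pairwise similarity)
    is treated as evidence that the underlying question is ambiguous.
    """
    sims = [jaccard(a, b) for a, b in combinations(answers, 2)]
    return {
        "n_distinct": len({a.lower().strip() for a in answers}),
        "mean_sim": sum(sims) / len(sims),
        "min_sim": min(sims),
    }

# Hypothetical answers sampled from an LLM for one question:
answers = [
    "He won in 2008.",
    "Barack Obama won in 2008.",
    "It depends on which election you mean.",
]
feats = disagreement_features(answers)
```

In a full pipeline, feature vectors like `feats` — extracted for bootstrapped and shuffled sets of few-shot examples — would be the training input for a classifier such as scikit-learn's `RandomForestClassifier`, which then predicts whether a new question is ambiguous.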