Zhengyan Shi
2025
Ambiguity Detection and Uncertainty Calibration for Question Answering with Large Language Models
Zhengyan Shi | Giuseppe Castellucci | Simone Filice | Saar Kuzi | Elad Kravi | Eugene Agichtein | Oleg Rokhlenko | Shervin Malmasi
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
Large Language Models (LLMs) have demonstrated excellent capabilities in Question Answering (QA) tasks, yet their ability to identify and address ambiguous questions remains underdeveloped. Ambiguities in user queries often lead to inaccurate or misleading answers, undermining user trust in these systems. Prior prompt-based attempts have performed little better than random guessing, leaving a significant gap in effective ambiguity detection. To address this, we propose a novel framework for detecting ambiguous questions within LLM-based QA systems. We first prompt an LLM to generate multiple answers to a question, and then analyze them to infer whether the question is ambiguous. To classify ambiguity, we use a lightweight Random Forest model trained on a bootstrapped and shuffled dataset of 6-shot examples. Experimental results on the ASQA, PACIFIC, and ABG-COQA datasets demonstrate the effectiveness of our approach, with accuracy of up to 70.8%. Furthermore, our framework enhances the confidence calibration of LLM outputs, leading to more trustworthy QA systems that can handle complex questions.
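The abstract describes the detection pipeline only at a high level. The sketch below is a minimal illustration of that idea, assuming a hypothetical `generate_answers` helper for LLM sampling and simple pairwise-disagreement features; these are stand-ins for illustration, not the prompts or features used in the paper.

```python
# Minimal sketch of an "ambiguity detection via multiple sampled answers"
# pipeline. `generate_answers` and the disagreement features are
# illustrative assumptions, not the paper's actual method.
from itertools import combinations
from difflib import SequenceMatcher

from sklearn.ensemble import RandomForestClassifier


def generate_answers(question: str, n: int = 5) -> list[str]:
    """Hypothetical helper: sample n answers to `question` from an LLM."""
    raise NotImplementedError("plug in your LLM client here")


def disagreement_features(answers: list[str]) -> list[float]:
    """Turn a set of sampled answers into simple diversity features."""
    sims = [SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(answers, 2)]
    mean_sim = sum(sims) / len(sims)
    return [mean_sim, min(sims), max(sims), len(set(answers)) / len(answers)]


def train_detector(questions: list[str], labels: list[int]) -> RandomForestClassifier:
    """Fit a lightweight Random Forest on a small labelled set of
    (question, is_ambiguous) examples, as the abstract suggests."""
    X = [disagreement_features(generate_answers(q)) for q in questions]
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```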
2024
SummEQuAL: Summarization Evaluation via Question Answering using Large Language Models
Junyuan Liu | Zhengyan Shi | Aldo Lipani
Proceedings of the 2nd Workshop on Natural Language Reasoning and Structured Explanations (@ACL 2024)
Summarization is hard to evaluate due to its diverse and abstract nature. Although N-gram-based metrics like BLEU and ROUGE are prevalent, they often do not align well with human evaluations. Model-based alternatives such as BERTScore improve on this, but typically require extensive labelled data. The advent of Large Language Models (LLMs) presents a promising avenue for evaluation. To this end, we introduce SummEQuAL, a novel content-based framework that uses LLMs for unified, reproducible summarization evaluation. SummEQuAL evaluates a summary by comparing its content with the source document, employing a question-answering approach to gauge both recall and precision. To validate SummEQuAL’s effectiveness, we develop a dataset based on MultiWOZ. Experiments on SummEval and our MultiWOZ-based dataset show that SummEQuAL substantially improves the quality of summarization evaluation. Notably, SummEQuAL demonstrates a 19.7% improvement over QuestEval in sample-level Pearson correlation with human assessments of consistency on the SummEval dataset, and it exceeds the BERTScore baseline by 17.3% in Spearman correlation on our MultiWOZ-based dataset. Our study illuminates the potential of LLMs as a unified evaluation framework, setting a new paradigm for future summarization evaluation.
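As a concrete illustration of the QA-based recall/precision idea described above, the following sketch scores a summary against its source document. The `generate_questions`, `answer`, and token-overlap `agreement` functions are assumed stand-ins for the LLM calls and answer matching; the paper’s actual prompts and scoring may differ.

```python
# Minimal sketch of a QA-based summary evaluation loop in the spirit of
# SummEQuAL. The helpers below are assumptions, not the paper's method.
def generate_questions(text: str) -> list[str]:
    """Hypothetical helper: ask an LLM for questions answerable from `text`."""
    raise NotImplementedError("plug in your LLM client here")


def answer(question: str, context: str) -> str:
    """Hypothetical helper: ask an LLM to answer using only `context`."""
    raise NotImplementedError("plug in your LLM client here")


def agreement(a: str, b: str) -> float:
    """Crude token-overlap F1 between two answers; an LLM judge could be used instead."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    overlap = len(ta & tb)
    p, r = overlap / len(tb), overlap / len(ta)
    return 2 * p * r / (p + r) if p + r else 0.0


def summequal_style_score(source: str, summary: str) -> dict[str, float]:
    def mean(xs: list[float]) -> float:
        return sum(xs) / len(xs) if xs else 0.0

    # Precision: are the summary's claims supported by the source?
    prec = [agreement(answer(q, source), answer(q, summary))
            for q in generate_questions(summary)]
    # Recall: does the summary cover the source's content?
    rec = [agreement(answer(q, summary), answer(q, source))
           for q in generate_questions(source)]
    return {"precision": mean(prec), "recall": mean(rec)}
```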