Bill Howe
2025
Know Your Limits: A Survey of Abstention in Large Language Models
Bingbing Wen | Jihan Yao | Shangbin Feng | Chenjun Xu | Yulia Tsvetkov | Bill Howe | Lucy Lu Wang
Transactions of the Association for Computational Linguistics, Volume 13
Abstention, the refusal of large language models (LLMs) to provide an answer, is increasingly recognized for its potential to mitigate hallucinations and enhance safety in LLM systems. In this survey, we introduce a framework to examine abstention from three perspectives: the query, the model, and human values. We organize the literature on abstention methods, benchmarks, and evaluation metrics using this framework, and discuss merits and limitations of prior work. We further identify and motivate areas for future research, such as whether abstention can be achieved as a meta-capability that transcends specific tasks or domains, and opportunities to optimize abstention abilities in specific contexts. In doing so, we aim to broaden the scope and impact of abstention methodologies in AI systems.
2024
Characterizing LLM Abstention Behavior in Science QA with Context Perturbations
Bingbing Wen | Bill Howe | Lucy Lu Wang
Findings of the Association for Computational Linguistics: EMNLP 2024
The correct model response in the face of uncertainty is to abstain from answering a question so as not to mislead the user. In this work, we study the ability of LLMs to abstain from answering context-dependent science questions when provided insufficient or incorrect context. We probe model sensitivity in several settings: removing gold context, replacing gold context with irrelevant context, and providing additional context beyond what is given. In experiments on four QA datasets with six LLMs, we show that performance varies greatly across models, across the type of context provided, and also by question type; in particular, many LLMs seem unable to abstain from answering boolean questions using standard QA prompts. Our analysis also highlights the unexpected impact of abstention performance on QA task accuracy. Counter-intuitively, in some settings, replacing gold context with irrelevant context or adding irrelevant context to gold context can improve abstention performance in a way that results in improvements in task performance. Our results imply that changes are needed in QA dataset design and evaluation to more effectively assess the correctness and downstream impacts of model abstention.
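A minimal sketch of the context-perturbation settings described in this abstract, not the authors' released code: it assumes a hypothetical QA example with a `question`, its `gold_context`, and an `irrelevant_context` drawn from an unrelated example, and uses an assumed prompt template.

```python
def build_prompt(question: str, context: str | None) -> str:
    """Assemble a standard QA prompt; the exact template is an assumption."""
    if context:
        return f"Context: {context}\nQuestion: {question}\nAnswer:"
    return f"Question: {question}\nAnswer:"


def perturbation_settings(question: str, gold_context: str, irrelevant_context: str) -> dict[str, str]:
    """Return prompts for the perturbation settings probed in the paper."""
    return {
        "gold": build_prompt(question, gold_context),                    # unmodified control
        "no_context": build_prompt(question, None),                      # gold context removed
        "irrelevant_only": build_prompt(question, irrelevant_context),   # gold replaced with irrelevant context
        "gold_plus_irrelevant": build_prompt(                            # irrelevant context added to gold
            question, gold_context + "\n" + irrelevant_context
        ),
    }


# Example usage: a well-calibrated model should answer under "gold" and abstain
# under "no_context" or "irrelevant_only", since the needed information is absent.
prompts = perturbation_settings(
    question="Does protein X bind to receptor Y?",
    gold_context="Prior assays show protein X binds receptor Y with high affinity.",
    irrelevant_context="The 2019 expedition mapped three new cave systems.",
)
for name, prompt in prompts.items():
    print(f"--- {name} ---\n{prompt}\n")
```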