Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang | Lin Qiu | Qipeng Guo | Cheng Deng | Yue Zhang | Zheng Zhang | Chenghu Zhou | Xinbing Wang | Luoyi Fu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs) have gained significant popularity for their impressive performance across diverse fields. However, LLMs are prone to hallucinate untruthful or nonsensical outputs that fail to meet user expectations in many real-world applications. Existing works for detecting hallucinations in LLMs either rely on external knowledge for reference retrieval or require sampling multiple responses from the LLM for consistency verification, making these methods costly and inefficient. In this paper, we propose a novel reference-free, uncertainty-based method for detecting hallucinations in LLMs. Our approach imitates human focus in factuality checking from three aspects: 1) focus on the most informative and important keywords in the given text; 2) focus on the unreliable tokens in historical context which may lead to a cascade of hallucinations; and 3) focus on the token properties such as token type and token frequency. Experimental results on relevant datasets demonstrate the effectiveness of our proposed method, which achieves state-of-the-art performance across all the evaluation metrics and eliminates the need for additional information.
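The abstract describes a reference-free score built from the model's own token-level uncertainty, reweighted toward informative keywords. The sketch below is a minimal illustration of that general idea, not the authors' implementation: it assumes a Hugging Face causal LM ("gpt2" is chosen purely for illustration) and substitutes a crude stopword heuristic for the paper's keyword identification, hallucination-propagation penalty on unreliable history tokens, and token-type/frequency adjustments.

```python
# Illustrative sketch only: a keyword-weighted, token-level uncertainty score.
# The keyword filter, history penalty, and frequency weighting of the paper
# are replaced here by a simple stopword heuristic.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder model, not one evaluated in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "was", "for"}

def hallucination_score(text: str) -> float:
    """Average negative log-probability over 'informative' tokens only."""
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Shift so the logits at position t score the token at position t+1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    target_ids = input_ids[:, 1:]
    token_nll = -log_probs.gather(-1, target_ids.unsqueeze(-1)).squeeze(-1)[0]

    scores, weights = [], []
    for nll, tok_id in zip(token_nll.tolist(), target_ids[0].tolist()):
        tok = tokenizer.decode([tok_id]).strip().lower()
        # "Focus on keywords": ignore stopwords and non-alphabetic tokens.
        weight = 0.0 if (tok in STOPWORDS or not tok.isalpha()) else 1.0
        scores.append(weight * nll)
        weights.append(weight)
    # Higher average uncertainty on content tokens suggests hallucination.
    return sum(scores) / max(sum(weights), 1.0)

print(hallucination_score("The Eiffel Tower is located in Berlin."))
```

A usage note on the design: scoring only content-bearing tokens mirrors the paper's first "focus" (informative keywords); extending the loop with a penalty that propagates from earlier high-uncertainty tokens, and with per-token frequency weights, would correspond to the second and third aspects.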