Too Consistent to Detect: A Study of Self-Consistent Errors in LLMs
Hexiang Tan | Fei Sun | Sha Liu | Du Su | Qi Cao | Xin Chen | Jingang Wang | Xunliang Cai | Yuanzhuo Wang | Huawei Shen | Xueqi Cheng
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
As large language models (LLMs) often generate plausible but incorrect content, error detection has become increasingly critical to ensure truthfulness. However, existing detection methods often overlook a critical problem we term **self-consistent errors**, where an LLM repeatedly generates the same incorrect response across multiple stochastic samples. This work formally defines self-consistent errors and evaluates mainstream detection methods on them. Our investigation reveals two key findings: (1) Unlike inconsistent errors, whose frequency diminishes significantly as LLM scale increases, the frequency of self-consistent errors remains stable or even increases. (2) All four types of detection methods significantly struggle to detect self-consistent errors. These findings reveal critical limitations in current detection methods and underscore the need for improvement. Motivated by the observation that self-consistent errors often differ across LLMs, we propose a simple but effective cross-model probe method that fuses hidden-state evidence from an external verifier LLM. Our method significantly enhances detection performance on self-consistent errors across three LLM families.
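The abstract describes the cross-model probe only at a high level, so the snippet below is a minimal sketch under stated assumptions rather than the paper's implementation: it assumes the hidden-state evidence is the last-token hidden state from a chosen layer of each model, that fusion is simple concatenation of target-LLM and verifier-LLM features, and that the probe is a linear classifier. The helper names (`extract_hidden_state`, `train_cross_model_probe`) are hypothetical.

```python
# Illustrative sketch of a cross-model error probe (assumptions noted above;
# the paper's exact fusion, layer choice, and probe architecture may differ).
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression


def extract_hidden_state(model, tokenizer, text, layer=-1):
    """Last-token hidden state of `text` from one Hugging Face causal LM."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1].float().cpu().numpy()


def train_cross_model_probe(target_feats, verifier_feats, labels):
    """Fuse features from the target LLM and an external verifier LLM,
    then fit a linear probe that predicts whether a response is an error."""
    X = np.concatenate([target_feats, verifier_feats], axis=-1)  # fusion by concatenation
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X, labels)  # labels: 1 = incorrect (error), 0 = correct
    return probe
```

The intuition, as suggested by the abstract, is that because different LLMs tend not to share the same self-consistent errors, the verifier model's representation can supply an error signal that the target model's own hidden states lack.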