Understanding the Dark Side of LLMs’ Intrinsic Self-Correction
Qingjie Zhang | Di Wang | Haoting Qian | Yiming Li | Tianwei Zhang | Minlie Huang | Ke Xu | Hewu Li | Liu Yan | Han Qiu
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
Intrinsic self-correction was initially proposed to improve LLMs’ responses via feedback based solely on their inherent capability. However, recent works show that LLMs’ intrinsic self-correction fails without oracle labels as feedback. In this paper, our research goal is to *interpret LLMs’ intrinsic self-correction for different tasks, especially for the failure cases.* Covering one simple task and three complex tasks with state-of-the-art (SOTA) LLMs such as ChatGPT, Llama, and DeepSeek, we design three interpretation methods to reveal the dark side of LLMs’ intrinsic self-correction. We find that intrinsic self-correction can (1) cause LLMs to waver in both intermediate and final answers and lead to prompt bias on simple factual questions, and (2) introduce human-like cognitive biases on complex tasks. In light of our findings, we also provide two simple yet effective mitigation strategies: question repeating and supervised fine-tuning with a few samples. We open-source our work at https://x-isc.info/.
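For illustration, here is a minimal sketch of an intrinsic self-correction loop and the question-repeating mitigation the abstract mentions. The `chat(messages)` helper is a hypothetical stand-in for any chat-style LLM API, and the prompts are generic paraphrases of the common "review your answer" pattern, not the paper's exact prompts.

```python
def chat(messages):
    """Hypothetical stand-in for an LLM chat API call.

    Replace with a real client (e.g., an OpenAI-compatible endpoint or a
    local Llama/DeepSeek server) that maps a message list to a reply string.
    """
    raise NotImplementedError("Wire this to an actual chat API.")


def intrinsic_self_correction(question: str, rounds: int = 2) -> str:
    """Plain intrinsic self-correction: feedback uses only the model's own
    capability, with no oracle labels. Later rounds may cause the model to
    waver on its intermediate and final answers."""
    messages = [{"role": "user", "content": question}]
    answer = chat(messages)
    for _ in range(rounds):
        messages.append({"role": "assistant", "content": answer})
        messages.append({
            "role": "user",
            "content": "Review your previous answer and correct it if needed.",
        })
        answer = chat(messages)
    return answer


def with_question_repeating(question: str, rounds: int = 2) -> str:
    """Question-repeating mitigation (a sketch): restate the original
    question inside each feedback prompt so later turns stay anchored to
    the question rather than drifting toward the critique."""
    messages = [{"role": "user", "content": question}]
    answer = chat(messages)
    for _ in range(rounds):
        messages.append({"role": "assistant", "content": answer})
        messages.append({
            "role": "user",
            "content": (
                f"The question was: {question}\n"
                "Review your previous answer and correct it if needed."
            ),
        })
        answer = chat(messages)
    return answer
```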