Line of Duty: Evaluating LLM Self-Knowledge via Consistency in Feasibility Boundaries

Sahil Kale, Vijaykant Nadadur


Abstract
As LLMs grow more powerful, their most profound achievement may be recognising when to say “I don’t know”. Existing studies on LLM self-knowledge have been largely constrained by human-defined notions of feasibility, often neglecting the reasons behind unanswerability by LLMs and failing to study deficient types of self-knowledge. This study aims to obtain intrinsic insights into different types of LLM self-knowledge with a novel methodology: allowing them the flexibility to set their own feasibility boundaries and then analysing the consistency of these limits. We find that even frontier models like GPT-4o and Mistral Large are not sure of their own capabilities more than 80% of the time, highlighting a significant lack of trustworthiness in responses. Our analysis of confidence balance in LLMs indicates that models swing between overconfidence and conservatism in feasibility boundaries depending on task categories and that the most significant self-knowledge weaknesses lie in temporal awareness and contextual understanding. These difficulties in contextual comprehension additionally lead models to question their operational boundaries, resulting in considerable confusion within the self-knowledge of LLMs. We make our code and results available publicly.
Anthology ID:
2025.trustnlp-main.10
Volume:
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Trista Cao, Anubrata Das, Tharindu Kumarage, Yixin Wan, Satyapriya Krishna, Ninareh Mehrabi, Jwala Dhamala, Anil Ramakrishna, Aram Galystan, Anoop Kumar, Rahul Gupta, Kai-Wei Chang
Venues:
TrustNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
127–140
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.10/
Cite (ACL):
Sahil Kale and Vijaykant Nadadur. 2025. Line of Duty: Evaluating LLM Self-Knowledge via Consistency in Feasibility Boundaries. In Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025), pages 127–140, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Line of Duty: Evaluating LLM Self-Knowledge via Consistency in Feasibility Boundaries (Kale & Nadadur, TrustNLP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.10.pdf