Knowledge Editing Induces Underconfidence in Language Models

Ryo Hasegawa, Yusuke Sakai, Hidetaka Kamigaito, Taro Watanabe


Abstract
As language models continue to scale, the demand for knowledge editing, a retraining-free method for updating knowledge, has increased. However, since knowledge editing directly alters token prediction probabilities acquired during pretraining, those probabilities may diverge from the empirical distribution. In this study, we analyze the impact of knowledge editing by measuring confidence calibration, i.e., the alignment between token prediction probabilities and task accuracy, before and after editing. Our results reveal that, for tasks requiring semantic understanding, the increase in token prediction probabilities tends to be smaller than the corresponding improvement in accuracy, suggesting that knowledge editing methods make models underconfident in their predictions.
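The abstract reports confidence calibration before and after editing but does not spell out the metric here; the sketch below assumes the common Expected Calibration Error (ECE) formulation with equal-width confidence bins, and all numbers are purely illustrative rather than taken from the paper.

```python
# Minimal ECE sketch (assumption: standard equal-width binning formulation).
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: token prediction probabilities the model assigns to its answers;
    correct: 1 if the answer matches the gold label, else 0."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Weight each bin's |accuracy - confidence| gap by its share of samples.
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy before/after comparison: lower post-edit confidences with unchanged
# accuracy would show up as a larger calibration gap (underconfidence).
ece_before = expected_calibration_error([0.9, 0.7, 0.6], [1, 1, 0])
ece_after = expected_calibration_error([0.5, 0.4, 0.3], [1, 1, 0])
```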
Anthology ID: 2025.starsem-1.27
Volume: Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Month: November
Year: 2025
Address: Suzhou, China
Editors: Lea Frermann, Mark Stevenson
Venue: *SEM
Publisher: Association for Computational Linguistics
Pages: 338–347
URL: https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.27/
Cite (ACL): Ryo Hasegawa, Yusuke Sakai, Hidetaka Kamigaito, and Taro Watanabe. 2025. Knowledge Editing Induces Underconfidence in Language Models. In Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025), pages 338–347, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Knowledge Editing Induces Underconfidence in Language Models (Hasegawa et al., *SEM 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.27.pdf