ConKE: Conceptualization-Augmented Knowledge Editing in Large Language Models for Commonsense Reasoning

Liyu Zhang, Weiqi Wang, Tianqing Fang, Yangqiu Song


Abstract
Knowledge Editing (KE) aims to adjust a Large Language Model's (LLM) internal representations and parameters to correct inaccuracies and improve output consistency without incurring the computational expense of re-training the entire model. However, editing commonsense knowledge still faces difficulties: limited knowledge coverage in existing resources, the infeasibility of annotating labels for the vast amount of commonsense knowledge, and the rigid knowledge formats required by current editing methods. In this paper, we address these challenges by presenting ConceptEdit, a framework that integrates conceptualization and instantiation into the KE pipeline for LLMs to enhance their commonsense reasoning capabilities. ConceptEdit dynamically diagnoses implausible commonsense knowledge within an LLM using a separate verifier LLM and augments the source knowledge to be edited with conceptualization for stronger generalizability. Experimental results demonstrate that LLMs enhanced with ConceptEdit generate commonsense knowledge with higher plausibility than other baselines and achieve stronger performance across multiple question answering benchmarks. Our data, code, and models are publicly available at https://github.com/HKUST-KnowComp/ConKE.
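The abstract describes a diagnose-then-edit loop: a verifier LLM flags implausible commonsense triples, the flagged triples are conceptualized into more general forms, and those forms are then edited into the model. Below is a minimal illustrative sketch of that pipeline in Python, assuming hypothetical helpers verifier_llm.is_plausible, conceptualize, and apply_edit; these names are placeholders, not the authors' actual API (see the GitHub repository above for the real implementation).

    def concept_edit(model, triples, verifier_llm, conceptualize, apply_edit):
        """Diagnose implausible commonsense triples, then edit their
        conceptualized (more general) forms into the model."""
        for triple in triples:  # e.g. ("PersonX eats breakfast", "xWant", "to go to sleep")
            # Step 1: a separate verifier LLM flags implausible knowledge.
            if verifier_llm.is_plausible(triple):
                continue
            # Step 2: abstract the instance to a concept-level triple
            # (e.g. "eats breakfast" -> "has a meal") for stronger generalizability.
            concept_triple = conceptualize(triple)
            # Step 3: apply a standard knowledge-editing update using the
            # conceptualized triple as the source knowledge.
            model = apply_edit(model, concept_triple)
        return model

All three helpers are passed in as parameters rather than imported, since the abstract does not specify which verifier, conceptualizer, or editing method is used.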
Anthology ID:
2025.findings-acl.35
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
627–635
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.35/
Cite (ACL):
Liyu Zhang, Weiqi Wang, Tianqing Fang, and Yangqiu Song. 2025. ConKE: Conceptualization-Augmented Knowledge Editing in Large Language Models for Commonsense Reasoning. In Findings of the Association for Computational Linguistics: ACL 2025, pages 627–635, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
ConKE: Conceptualization-Augmented Knowledge Editing in Large Language Models for Commonsense Reasoning (Zhang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.35.pdf