Editing Conceptual Knowledge for Large Language Models

Xiaohan Wang, Shengyu Mao, Shumin Deng, Yunzhi Yao, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen, Ningyu Zhang


Abstract
Recently, there has been a growing interest in knowledge editing for Large Language Models (LLMs). Current approaches and evaluations merely explore instance-level editing, while whether LLMs possess the capability to modify concepts remains unclear. This paper pioneers the investigation of editing conceptual knowledge for LLMs by constructing a novel benchmark dataset, ConceptEdit, and establishing a suite of new metrics for evaluation. The experimental results reveal that, although existing editing methods can efficiently modify concept-level definitions to some extent, they also have the potential to distort the related instance-level knowledge in LLMs, leading to poor performance. We anticipate this work can inspire further progress in understanding LLMs.
Anthology ID:
2024.findings-emnlp.40
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
706–724
URL:
https://preview.aclanthology.org/ingest_wac_2008/2024.findings-emnlp.40/
DOI:
10.18653/v1/2024.findings-emnlp.40
Cite (ACL):
Xiaohan Wang, Shengyu Mao, Shumin Deng, Yunzhi Yao, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen, and Ningyu Zhang. 2024. Editing Conceptual Knowledge for Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 706–724, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Editing Conceptual Knowledge for Large Language Models (Wang et al., Findings 2024)
PDF:
https://preview.aclanthology.org/ingest_wac_2008/2024.findings-emnlp.40.pdf