Epistemology of Language Models: Do Language Models Have Holistic Knowledge?

Minsu Kim, James Thorne


Abstract
This paper investigates the inherent knowledge in language models from the perspective of epistemological holism. It explores whether LLMs exhibit characteristics consistent with epistemological holism, under which different kinds of knowledge, such as commonsense, general, and specific knowledge, each play a distinct role, with core knowledge serving as the foundation of the knowledge system and being difficult to revise. To assess these holism-related traits, we created a scientific reasoning dataset and examined the epistemology of language models through three tasks: Abduction, Revision, and Argument Generation. In the abduction task, the language models explained anomalous situations while avoiding revision of core knowledge. In the other tasks, however, the models did not distinguish between core and peripheral knowledge, showing incomplete alignment with holistic knowledge principles.
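The abstract describes probing models with conflicts between core and peripheral beliefs. Below is a minimal, hypothetical Python sketch of how such an abduction-style probe might be structured; the prompt wording, the belief examples, and the `build_abduction_prompt` and `probe` helpers are illustrative assumptions, not the authors' dataset or released code.

```python
# Hypothetical sketch of an abduction-style probe (not the paper's actual setup).
# Given an anomalous observation and beliefs split into "core" (e.g., physical laws)
# and "peripheral" (e.g., auxiliary assumptions), we ask a model which single belief
# it would revise. Holism predicts revisions should target peripheral beliefs first.

from typing import Callable

def build_abduction_prompt(observation: str, core: list[str], peripheral: list[str]) -> str:
    # Number the beliefs so the model can answer with a single index.
    beliefs = core + peripheral
    numbered = "\n".join(f"({i + 1}) {b}" for i, b in enumerate(beliefs))
    return (
        "You hold the following beliefs:\n"
        + numbered
        + f"\n\nObservation: {observation}\n"
        "The observation conflicts with your beliefs. Which single belief would you "
        "revise to explain it? Answer with the belief's number only."
    )

def probe(model: Callable[[str], str], observation: str,
          core: list[str], peripheral: list[str]) -> str:
    """Return 'core' or 'peripheral' depending on which belief the model revises."""
    answer = model(build_abduction_prompt(observation, core, peripheral))
    index = int("".join(ch for ch in answer if ch.isdigit())) - 1
    return "core" if index < len(core) else "peripheral"

if __name__ == "__main__":
    core = ["Objects fall when dropped."]
    peripheral = ["The ball in this video is made of rubber."]
    # Stub model standing in for an LLM call; here it always revises belief (2).
    stub = lambda prompt: "2"
    print(probe(stub, "The ball hovers in mid-air after being released.",
                core, peripheral))  # -> "peripheral"
```

Aggregating the `core`/`peripheral` labels over many such items would give the kind of revision-preference statistic the abstract alludes to.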
Anthology ID:
2024.findings-acl.751
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12644–12669
URL:
https://aclanthology.org/2024.findings-acl.751
DOI:
10.18653/v1/2024.findings-acl.751
Bibkey:
kim-thorne-2024-epistemology
Cite (ACL):
Minsu Kim and James Thorne. 2024. Epistemology of Language Models: Do Language Models Have Holistic Knowledge?. In Findings of the Association for Computational Linguistics ACL 2024, pages 12644–12669, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Epistemology of Language Models: Do Language Models Have Holistic Knowledge? (Kim & Thorne, Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.751.pdf