KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models

Jiho Kim, Yeonsu Kwon, Yohan Jo, Edward Choi


Abstract
While large language models (LLMs) have made considerable advancements in understanding and generating unstructured text, their application to structured data remains underexplored. In particular, using LLMs for complex reasoning tasks on knowledge graphs (KGs) remains largely untouched. To address this, we propose KG-GPT, a multi-purpose framework leveraging LLMs for tasks employing KGs. KG-GPT comprises three steps: Sentence Segmentation, Graph Retrieval, and Inference, which partition a sentence, retrieve relevant graph components, and derive logical conclusions, respectively. We evaluate KG-GPT on KG-based fact verification and KGQA benchmarks, where the model shows competitive and robust performance, even outperforming several fully-supervised models. Our work therefore marks a significant step toward unifying structured and unstructured data processing within the realm of LLMs.
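The abstract's three-step pipeline can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: KG-GPT prompts an LLM at each step, whereas the function names and the keyword-matching heuristics below are simple stand-ins.

```python
# Toy sketch of the KG-GPT pipeline (Sentence Segmentation -> Graph
# Retrieval -> Inference) applied to KG-based fact verification.
# All heuristics here are illustrative placeholders for LLM calls.

def segment_sentence(claim: str) -> list[str]:
    """Sentence Segmentation: split a claim into simpler clauses.
    (Stand-in: split on ' and '; KG-GPT uses an LLM for this step.)"""
    return [part.strip() for part in claim.split(" and ")]

def retrieve_subgraph(clause: str, kg: list[tuple]) -> list[tuple]:
    """Graph Retrieval: keep triples whose elements appear in the clause.
    (Stand-in: substring matching instead of LLM-guided retrieval.)"""
    text = clause.lower()
    return [t for t in kg if any(str(e).lower() in text for e in t)]

def infer(clause: str, evidence: list[tuple]) -> bool:
    """Inference: decide whether the clause is supported by the evidence.
    (Stand-in: supported iff any evidence triple was retrieved.)"""
    return len(evidence) > 0

def kg_gpt_verify(claim: str, kg: list[tuple]) -> bool:
    """Full pipeline: a claim is verified only if every clause is supported."""
    return all(infer(c, retrieve_subgraph(c, kg))
               for c in segment_sentence(claim))

# Toy knowledge graph of (head, relation, tail) triples.
kg = [("Inception", "directed_by", "Christopher Nolan"),
      ("Inception", "release_year", "2010")]

print(kg_gpt_verify(
    "Inception was directed by Christopher Nolan and released in 2010", kg))
```

The decomposition is the point: each clause is verified against its own retrieved subgraph, so a single unsupported clause falsifies the whole claim.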
Anthology ID:
2023.findings-emnlp.631
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
9410–9421
URL:
https://aclanthology.org/2023.findings-emnlp.631
DOI:
10.18653/v1/2023.findings-emnlp.631
Cite (ACL):
Jiho Kim, Yeonsu Kwon, Yohan Jo, and Edward Choi. 2023. KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9410–9421, Singapore. Association for Computational Linguistics.
Cite (Informal):
KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models (Kim et al., Findings 2023)
PDF:
https://preview.aclanthology.org/naacl24-info/2023.findings-emnlp.631.pdf