Enhancing Large Language Model for Knowledge Graph Completion via Structure-Aware Alignment-Tuning

Yu Liu, Yanan Cao, Xixun Lin, Yanmin Shang, Shi Wang, Shirui Pan


Abstract
Knowledge graph completion (KGC) aims to infer new knowledge and make predictions from knowledge graphs. Recently, large language models (LLMs) have exhibited remarkable reasoning capabilities, and LLM-enhanced KGC methods, which primarily focus on designing task-specific instructions, have achieved promising advancements. However, two critical challenges remain. First, existing methods often ignore the inconsistent representation spaces between natural language and graph structures. Second, most approaches design separate instructions for each KGC task, leading to duplicated effort and time-consuming development. To address these challenges, we propose SAT, a novel framework that enhances LLMs for KGC via structure-aware alignment-tuning. Specifically, we first introduce hierarchical knowledge alignment to align graph embeddings with the natural language space through multi-task contrastive learning. Then, we propose structural instruction tuning to guide LLMs in performing structure-aware reasoning over KGs, using a unified graph instruction combined with a lightweight knowledge adapter. Experimental results on two KGC tasks across four benchmark datasets demonstrate that SAT significantly outperforms state-of-the-art methods, especially on the link prediction task, with improvements ranging from 8.7% to 29.8%.
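The abstract alone does not specify implementation details, so the following is only a minimal sketch of what "aligning graph embeddings with the natural language space" via a "lightweight knowledge adapter" and contrastive learning can look like in practice; it is not the authors' code. The adapter here is a two-layer MLP projecting graph-encoder outputs to the LLM's hidden size, and the alignment objective is a standard in-batch InfoNCE loss. All module names, dimensions, and the specific loss form are illustrative assumptions.

# Hypothetical sketch (PyTorch), NOT the SAT implementation:
# (1) a lightweight adapter projecting structural graph embeddings
#     into the LLM's embedding space, and
# (2) an InfoNCE-style contrastive loss pulling each entity's graph
#     embedding toward the text embedding of its description.

import torch
import torch.nn as nn
import torch.nn.functional as F


class KnowledgeAdapter(nn.Module):
    """Two-layer MLP mapping graph-encoder output to the LLM hidden size."""

    def __init__(self, graph_dim: int, llm_dim: int, hidden: int = 1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(graph_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, llm_dim),
        )

    def forward(self, graph_emb: torch.Tensor) -> torch.Tensor:
        # graph_emb: (batch, graph_dim) -> (batch, llm_dim)
        return self.proj(graph_emb)


def contrastive_alignment_loss(graph_emb: torch.Tensor,
                               text_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """In-batch InfoNCE: the i-th graph embedding should match the i-th
    text embedding and repel every other text in the batch."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature            # (batch, batch) similarities
    labels = torch.arange(g.size(0), device=g.device)
    # Symmetric loss over both retrieval directions (graph->text, text->graph).
    return (F.cross_entropy(logits, labels)
            + F.cross_entropy(logits.T, labels)) / 2


if __name__ == "__main__":
    adapter = KnowledgeAdapter(graph_dim=256, llm_dim=4096)
    graph_emb = adapter(torch.randn(8, 256))  # e.g. output of a KG encoder
    text_emb = torch.randn(8, 4096)           # e.g. pooled LLM hidden states
    print(contrastive_alignment_loss(graph_emb, text_emb).item())

For the paper's actual architecture, training objectives, and the unified graph instruction format, consult the PDF linked below.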
Anthology ID:
2025.emnlp-main.1061
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rosé, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20981–20995
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1061/
Cite (ACL):
Yu Liu, Yanan Cao, Xixun Lin, Yanmin Shang, Shi Wang, and Shirui Pan. 2025. Enhancing Large Language Model for Knowledge Graph Completion via Structure-Aware Alignment-Tuning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 20981–20995, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Enhancing Large Language Model for Knowledge Graph Completion via Structure-Aware Alignment-Tuning (Liu et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1061.pdf
Checklist:
2025.emnlp-main.1061.checklist.pdf