Injecting Structured Knowledge into LLMs via Graph Neural Networks

Zichao Li, Zong Ke, Puning Zhao


Abstract
Large language models (LLMs) have achieved remarkable success in natural language processing (NLP), but they often struggle to capture explicit linguistic structures and world knowledge. To address this limitation, we propose a hybrid model that integrates LLMs with graph neural networks (GNNs) to inject structured knowledge into NLP tasks. Our approach leverages the strengths of both components: LLMs provide rich contextual representations, while GNNs encode explicit structural priors from sources such as dependency trees, Abstract Meaning Representations (AMRs), and knowledge graphs. We evaluate the hybrid model on a diverse set of tasks, including semantic parsing, multi-hop question answering, text summarization, commonsense reasoning, and dependency parsing. Experimental results demonstrate consistent improvements over both standalone baselines and state-of-the-art methods, with relative gains of up to 2.3% in Exact Match scores for multi-hop QA and 1.7% in accuracy for commonsense reasoning. Ablation studies and sensitivity analyses further highlight the importance of balancing contextual and structural information. By bridging the gap between unstructured textual data and structured knowledge, our work advances the state of the art in NLP and paves the way for more interpretable and robust language models.
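The abstract describes the core architectural idea: contextual token representations from an LLM are combined with structural representations produced by a GNN over graphs such as dependency trees or knowledge graphs. The paper itself is not reproduced on this page, so the sketch below is only an illustrative assumption of how such a fusion could look, not the authors' implementation; module names, the gated fusion, and all dimensions are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code): fuse LLM contextual states
# with GNN-encoded structure from a token-level dependency graph.
import torch
import torch.nn as nn


class SimpleGNNLayer(nn.Module):
    """One mean-aggregation message-passing step over a token graph."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_feats, adj):
        # adj: (num_tokens, num_tokens) adjacency matrix of the dependency graph
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        neighbor_mean = adj @ node_feats / deg          # aggregate neighbor states
        return torch.relu(self.linear(neighbor_mean))   # transform aggregated message


class HybridFusion(nn.Module):
    """Gated fusion of LLM contextual states and GNN structural states (illustrative)."""
    def __init__(self, dim):
        super().__init__()
        self.gnn = SimpleGNNLayer(dim)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, llm_states, adj):
        # llm_states: (num_tokens, dim) contextual embeddings from an LLM encoder
        graph_states = self.gnn(llm_states, adj)
        gate = torch.sigmoid(self.gate(torch.cat([llm_states, graph_states], dim=-1)))
        # The gate balances contextual vs. structural information, the trade-off
        # the abstract's ablation studies highlight.
        return gate * llm_states + (1 - gate) * graph_states


# Toy usage: 5 tokens, hidden size 16, one dependency edge plus self-loops.
tokens, dim = 5, 16
llm_states = torch.randn(tokens, dim)
adj = torch.eye(tokens)
adj[0, 1] = adj[1, 0] = 1.0
fused = HybridFusion(dim)(llm_states, adj)
print(fused.shape)  # torch.Size([5, 16])
```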
Anthology ID:
2025.xllm-1.3
Volume:
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editors:
Hao Fei, Kewei Tu, Yuhui Zhang, Xiang Hu, Wenjuan Han, Zixia Jia, Zilong Zheng, Yixin Cao, Meishan Zhang, Wei Lu, N. Siddharth, Lilja Øvrelid, Nianwen Xue, Yue Zhang
Venues:
XLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
16–25
URL:
https://preview.aclanthology.org/landing_page/2025.xllm-1.3/
Cite (ACL):
Zichao Li, Zong Ke, and Puning Zhao. 2025. Injecting Structured Knowledge into LLMs via Graph Neural Networks. In Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025), pages 16–25, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Injecting Structured Knowledge into LLMs via Graph Neural Networks (Li et al., XLLM 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.xllm-1.3.pdf