Judge and Improve: Towards a Better Reasoning of Knowledge Graphs with Large Language Models
Mo Zhiqiang, Yang Hua, Jiahui Li, Yuan Liu, Shawn Wong, Jianmin Huang
Abstract
Graph Neural Networks (GNNs) have shown immense potential for improving the performance of large-scale models by effectively incorporating structured relational information. However, current approaches face two key challenges: (1) achieving robust semantic alignment between graph representations and large language models, and (2) ensuring interpretability of the generated outputs. To address these challenges, we propose ExGLM (Explainable Graph Language Model), a novel training framework designed to seamlessly integrate the graph and language modalities while enhancing transparency. Our framework introduces two core components: (1) a graph-language synergistic alignment module, which aligns graph structures with the language model to ensure semantic consistency across modalities; and (2) a judge-and-improve paradigm, which allows the language model to iteratively evaluate, refine, and prioritize responses with higher interpretability, thereby improving both performance and transparency. Extensive experiments on three benchmark datasets (ogbn-arxiv, Cora, and PubMed) demonstrate that ExGLM not only surpasses existing methods in efficiency but also generates outputs that are significantly more interpretable, effectively addressing the primary limitations of current approaches.
- Anthology ID:
- 2025.emnlp-main.269
- Volume:
- Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
- Month:
- November
- Year:
- 2025
- Address:
- Suzhou, China
- Editors:
- Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 5303–5320
- URL:
- https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.269/
- Cite (ACL):
- Mo Zhiqiang, Yang Hua, Jiahui Li, Yuan Liu, Shawn Wong, and Jianmin Huang. 2025. Judge and Improve: Towards a Better Reasoning of Knowledge Graphs with Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 5303–5320, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- Judge and Improve: Towards a Better Reasoning of Knowledge Graphs with Large Language Models (Zhiqiang et al., EMNLP 2025)
- PDF:
- https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.269.pdf
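The abstract's judge-and-improve paradigm can be illustrated with a minimal control-flow sketch. This is a hypothetical toy, not the paper's actual implementation: in ExGLM the language model itself would play both roles, whereas here `judge` and `improve` are stub functions (invented for illustration) so the iterative evaluate-refine loop is runnable.

```python
def judge(answer: str) -> float:
    """Toy judge: score an answer's interpretability.

    Rewards answers that state a reason and cite graph-structural
    evidence. A real judge would be an LLM scoring pass.
    """
    score = 0.0
    if "because" in answer:   # an explanation is present
        score += 0.5
    if "neighbor" in answer:  # graph evidence is cited
        score += 0.5
    return score


def improve(answer: str) -> str:
    """Toy improver: revise the answer to add the missing ingredients.

    A real improver would be an LLM rewriting pass conditioned on
    the judge's feedback.
    """
    if "because" not in answer:
        answer += " because of its cited context"
    if "neighbor" not in answer:
        answer += " and its neighbor nodes"
    return answer


def judge_and_improve(answer: str, threshold: float = 1.0,
                      max_rounds: int = 3) -> str:
    """Iteratively evaluate and refine until the judge is satisfied
    or the round budget is exhausted."""
    for _ in range(max_rounds):
        if judge(answer) >= threshold:
            break
        answer = improve(answer)
    return answer
```

The loop terminates either when the judged interpretability score clears the threshold or after a fixed number of refinement rounds, mirroring the abstract's "iteratively evaluate, refine, and prioritize" description.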