‘Hello, World!’: Making GNNs Talk with LLMs

Sunwoo Kim, Soo Yong Lee, Jaemin Yoo, Kijung Shin


Abstract
While graph neural networks (GNNs) have shown remarkable performance across diverse graph-related tasks, their high-dimensional hidden representations render them black boxes. In this work, we propose Graph Lingual Network (GLN), a GNN built on large language models (LLMs), with hidden representations in the form of human-readable text. Through careful prompt design, GLN incorporates not only the message passing module of GNNs but also advanced GNN techniques, including graph attention and initial residual connection. The comprehensibility of GLN’s hidden representations enables an intuitive analysis of how node representations change (1) across layers and (2) under advanced GNN techniques, shedding light on the inner workings of GNNs. Furthermore, we demonstrate that GLN achieves strong zero-shot performance on node classification and link prediction, outperforming existing LLM-based baseline methods.
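A minimal sketch of the core idea described above, assuming a generic text-generation function llm(prompt) standing in for any chat LLM; the function name, prompt wording, and toy graph are illustrative assumptions, not the paper's actual prompts or implementation:

import textwrap

def llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a chat LLM here.
    # For a runnable demo, we simply echo a truncated prompt.
    return "LLM summary of: " + textwrap.shorten(prompt, width=60)

# Toy graph: adjacency list, with node "hidden states" as readable text.
graph = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
state = {
    "A": "Paper on graph neural networks.",
    "B": "Paper on language models.",
    "C": "Survey of attention mechanisms.",
}
initial = dict(state)  # kept for an initial-residual-style connection

def message_passing_layer(graph, state, initial):
    """One GNN-style layer where aggregation happens in text space."""
    new_state = {}
    for node, neighbors in graph.items():
        neighbor_text = "\n".join(f"- {state[n]}" for n in neighbors)
        prompt = (
            f"Node description: {state[node]}\n"
            f"Neighbor descriptions:\n{neighbor_text}\n"
            f"Original description: {initial[node]}\n"
            "Summarize this node, weighting more relevant neighbors "
            "higher (attention-like) and keeping the original "
            "description in view (residual-like)."
        )
        new_state[node] = llm(prompt)
    return new_state

# Stacking layers corresponds to multi-hop message passing; unlike a
# standard GNN, every intermediate state[node] is human-readable text
# rather than a high-dimensional vector.
for _ in range(2):
    state = message_passing_layer(graph, state, initial)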
Anthology ID:
2025.findings-emnlp.555
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10508–10526
URL:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.555/
DOI:
10.18653/v1/2025.findings-emnlp.555
Cite (ACL):
Sunwoo Kim, Soo Yong Lee, Jaemin Yoo, and Kijung Shin. 2025. ‘Hello, World!’: Making GNNs Talk with LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 10508–10526, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
‘Hello, World!’: Making GNNs Talk with LLMs (Kim et al., Findings 2025)
PDF:
https://preview.aclanthology.org/author-page-yu-wang-polytechnic/2025.findings-emnlp.555.pdf
Checklist:
2025.findings-emnlp.555.checklist.pdf