Dongqi Fu
2026
How to Make LMs Strong Node Classifiers?
Zhe Xu | Kaveh Hassani | Si Zhang | Hanqing Zeng | Michihiro Yasunaga | Limei Wang | Dongqi Fu | Ning Yao | Bo Long | Hanghang Tong
Findings of the Association for Computational Linguistics: EACL 2026
Language Models (LMs) are increasingly challenging the dominance of domain-specific models, such as Graph Neural Networks (GNNs) and Graph Transformers (GTs), in graph learning tasks. Following this trend, we propose a novel approach that empowers off-the-shelf LMs to achieve performance comparable to state-of-the-art (SOTA) GNNs on node classification tasks, without requiring any architectural modifications. By preserving the LM’s original architecture, our approach retains a key benefit of LM instruction tuning: the ability to jointly train on diverse datasets, fostering greater flexibility and efficiency. To achieve this, we introduce two key augmentation strategies: (1) enriching the LM’s input using topological and semantic retrieval methods, which provide richer contextual information, and (2) guiding the LM’s classification process with a lightweight GNN classifier that effectively prunes class candidates. Our experiments on real-world datasets show that backbone Flan-T5 LMs equipped with these augmentation strategies outperform SOTA text-output node classifiers and are comparable to top-performing vector-output node classifiers. By bridging the gap between specialized node classifiers and general LMs, this work paves the way for more versatile and widely applicable graph learning models. We will open-source the code upon publication.
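A minimal sketch of the two augmentation strategies the abstract describes, not the authors' implementation: retrieved topological and semantic neighbors enrich the LM prompt, and a lightweight GNN's top-scoring classes prune the candidate labels before the LM answers. All data structures, embeddings, and GNN scores below are toy placeholders.

```python
# Sketch only: prompt augmentation for text-output node classification.
# Node texts, edges, embeddings, and GNN scores are illustrative stand-ins.
import numpy as np

node_texts = {
    0: "Survey of graph neural networks.",
    1: "Attention-based transformers for graphs.",
    2: "Image classification with CNNs.",
}
edges = {0: [1], 1: [0], 2: []}                      # topological neighbors
emb = np.random.default_rng(0).normal(size=(3, 8))   # stand-in text embeddings
classes = ["graph learning", "vision", "nlp", "theory"]

def semantic_neighbors(target, k=1):
    """Return the k nodes most cosine-similar to `target` (excluding itself)."""
    sims = emb @ emb[target] / (np.linalg.norm(emb, axis=1) * np.linalg.norm(emb[target]))
    return [i for i in np.argsort(-sims) if i != target][:k]

def build_prompt(target, gnn_scores, top_c=2, k=1):
    """Enrich the LM input with retrieved context and GNN-pruned class candidates."""
    context_ids = dict.fromkeys(edges[target] + semantic_neighbors(target, k))
    context = "\n".join(f"- {node_texts[i]}" for i in context_ids)
    candidates = [classes[i] for i in np.argsort(-gnn_scores)[:top_c]]  # prune labels
    return (
        f"Target node: {node_texts[target]}\n"
        f"Related nodes:\n{context}\n"
        f"Choose the best class from: {', '.join(candidates)}.\nAnswer:"
    )

# Toy GNN logits over the four classes for node 0; a real run would feed the
# resulting prompt to a Flan-T5 model and parse its text output.
print(build_prompt(0, gnn_scores=np.array([2.1, 0.3, 0.8, -0.5])))
```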
2025
Can Graph Neural Networks Learn Language with Extremely Weak Text Supervision?
Zihao Li | Lecheng Zheng | Bowen Jin | Dongqi Fu | Baoyu Jing | Yikun Ban | Jingrui He | Jiawei Han
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
While great success has been achieved in building vision models with Contrastive Language-Image Pre-training (CLIP) over Internet-scale image-text pairs, building transferable Graph Neural Networks (GNNs) with the CLIP pipeline is challenging because of the scarcity of labeled data and text supervision, the different levels of downstream tasks, and the conceptual gaps between domains. In this work, to address these issues, we propose a multi-modal prompt learning paradigm to effectively adapt a pre-trained GNN to downstream tasks and data, given only a few semantically labeled samples, each with extremely weak text supervision. Our new paradigm embeds the graphs directly in the same space as Large Language Models (LLMs) by learning both graph prompts and text prompts simultaneously. We demonstrate the superior performance of our paradigm in few-shot, multi-task-level, and cross-domain settings. Moreover, we build the first CLIP-style zero-shot classification prototype that can generalize GNNs to unseen classes with extremely weak text supervision.
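A minimal sketch, under assumed encoder and prompt shapes, of the CLIP-style matching the abstract describes: learnable graph and text prompts place graph embeddings and class-text embeddings in a shared space, so zero-shot classification reduces to cosine similarity against (possibly unseen) class descriptions. The module names and dimensions are hypothetical, not the authors' code.

```python
# Sketch only: prompted graph and text encoders scored CLIP-style.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 64

class PromptedGraphEncoder(nn.Module):
    """Stand-in GNN encoder with a learnable graph prompt added as virtual nodes."""
    def __init__(self, in_dim, dim=DIM, prompt_len=4):
        super().__init__()
        self.graph_prompt = nn.Parameter(torch.randn(prompt_len, in_dim) * 0.02)
        self.proj = nn.Linear(in_dim, dim)

    def forward(self, x):
        # Prepend the prompt, then mean-pool into a single graph embedding.
        x = torch.cat([self.graph_prompt, x], dim=0)
        return self.proj(x).mean(dim=0)

class PromptedTextEncoder(nn.Module):
    """Stand-in for a frozen LLM text encoder plus a learnable text prompt."""
    def __init__(self, vocab=1000, dim=DIM, prompt_len=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.text_prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, token_ids):
        tok = self.embed(token_ids)
        return torch.cat([self.text_prompt, tok], dim=0).mean(dim=0)

graph_enc, text_enc = PromptedGraphEncoder(in_dim=16), PromptedTextEncoder()

# Zero-shot scoring: cosine similarity between the graph embedding and each
# class-description embedding; the highest-scoring class is the prediction.
node_features = torch.randn(10, 16)                              # toy node features
class_tokens = [torch.randint(0, 1000, (5,)) for _ in range(3)]  # toy class texts
g = F.normalize(graph_enc(node_features), dim=0)
t = torch.stack([F.normalize(text_enc(c), dim=0) for c in class_tokens])
print("predicted class:", torch.argmax(t @ g).item())
```

In a training loop, both prompts would be optimized contrastively against the few weakly labeled samples while the pre-trained encoders stay largely frozen, which is what makes the few-shot and zero-shot settings tractable.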