@inproceedings{wu-etal-2025-graph,
    title = "Graph-R1: Incentivizing the Zero-Shot Graph Learning Capability in {LLM}s via Explicit Reasoning",
    author = "Wu, Yicong  and
      Lu, Guangyue  and
      Zuo, Yuan  and
      Zhang, Huarong  and
      Wu, Junjie",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1220/",
    pages = "23920--23938",
    ISBN = "979-8-89176-332-6",
    abstract = "Generalizing to unseen graph tasks without task-specific supervision remains challenging. Graph Neural Networks (GNNs) are limited by fixed label spaces, while Large Language Models (LLMs) lack structural inductive biases. Recent advances in Large Reasoning Models (LRMs) provide a zero-shot alternative via explicit, long chain-of-thought reasoning. Inspired by this, we propose a GNN-free approach that reformulates graph tasks{---}node classification, link prediction, and graph classification{---}as textual reasoning problems solved by LRMs. We introduce the first datasets with detailed reasoning traces for these tasks and develop Graph-R1, a reinforcement learning framework that leverages task-specific rethink templates to guide reasoning over linearized graphs. Experiments demonstrate that Graph-R1 outperforms state-of-the-art baselines in zero-shot settings, producing interpretable and effective predictions. Our work highlights the promise of explicit reasoning for graph learning and provides new resources for future research. Codes are available at https://github.com/lgybuaa/Graph-R1."
}