ReGraph: Learning to Reformulate Graph Encodings with Large Language Models

Amir Hadifar, Christopher Ochs, Arjan Van Ewijk


Abstract
Large language models can rephrase and restructure natural language effectively, but their potential for reformulating graph encodings remains underexplored, despite the significant impact of encoding choices on performance. In this work, we introduce ReGraph, a reinforcement learning-based approach that guides language models to reformulate graph encodings for improved task alignment. We demonstrate that reformulating graph encodings enhances reasoning and yields consistent performance gains on graph question answering tasks. Our results show that language models often prefer specific graph encodings, even if they are suboptimal for the task at hand.
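As context for the abstract, a minimal sketch of what a "graph encoding" means here: the same graph can be verbalized in different ways before being placed in an LLM prompt, and a reward signal (e.g., downstream QA accuracy) could score each reformulation, as in an RL setup. All function names and the reward stub below are hypothetical illustrations under those assumptions, not the paper's implementation.

```python
# Hypothetical illustration of textual graph encodings (not the paper's code).
# The same graph is serialized two ways; the stub reward marks where an RL
# signal (e.g., QA accuracy given the encoding) would score a reformulation.

edges = [("A", "B"), ("B", "C"), ("A", "C")]

def edge_list_encoding(edges):
    # One sentence per edge: "A is connected to B."
    return " ".join(f"{u} is connected to {v}." for u, v in edges)

def adjacency_encoding(edges):
    # Group neighbors per node: "A: B, C."
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    return " ".join(
        f"{node}: {', '.join(sorted(nbrs))}."
        for node, nbrs in sorted(adj.items())
    )

def reward(encoding: str) -> float:
    # Placeholder: in a reinforcement learning setup this would be task
    # performance of the LLM's answer when prompted with `encoding`.
    raise NotImplementedError

print(edge_list_encoding(edges))
print(adjacency_encoding(edges))
```

The two functions produce semantically equivalent prompts with different structure, which is the kind of encoding choice the abstract reports as affecting model performance.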
Anthology ID:
2025.ijcnlp-short.32
Volume:
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Kentaro Inui, Sakriani Sakti, Haofen Wang, Derek F. Wong, Pushpak Bhattacharyya, Biplab Banerjee, Asif Ekbal, Tanmoy Chakraborty, Dhirendra Pratap Singh
Venues:
IJCNLP | AACL
Publisher:
The Asian Federation of Natural Language Processing and The Association for Computational Linguistics
Pages:
386–394
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-short.32/
Cite (ACL):
Amir Hadifar, Christopher Ochs, and Arjan Van Ewijk. 2025. ReGraph: Learning to Reformulate Graph Encodings with Large Language Models. In Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 386–394, Mumbai, India. The Asian Federation of Natural Language Processing and The Association for Computational Linguistics.
Cite (Informal):
ReGraph: Learning to Reformulate Graph Encodings with Large Language Models (Hadifar et al., IJCNLP-AACL 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-short.32.pdf