D-RAG: Differentiable Retrieval-Augmented Generation for Knowledge Graph Question Answering
Guangze Gao | Zixuan Li | Chunfeng Yuan | Jiawei Li | Jianzhuo Wu | Yuehao Zhang | Xiaolong Jin | Bing Li | Weiming Hu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Knowledge Graph Question Answering (KGQA) aims to answer natural language questions based on knowledge graphs. Recent approaches apply the Retrieval-Augmented Generation (RAG) paradigm to incorporate Large Language Models (LLMs) into this task, where a retriever selects a question-related subgraph and an LLM-based generator then predicts answers based on the retrieved subgraph. However, the subgraph selection process is non-differentiable, preventing end-to-end training of the retriever and the generator in these approaches, which leads to sub-optimal performance. To overcome this limitation, this paper proposes a Differentiable RAG (D-RAG) approach that jointly optimizes the retriever and the generator for KGQA. By reformulating the optimization objective as an expectation over a subgraph distribution with respect to the answer generation likelihood, D-RAG makes this joint optimization feasible. Specifically, it implements the joint optimization through a differentiable subgraph sampling and prompting module that integrates Gumbel-Softmax reparameterization for sampling with a neural prompt construction process that fuses semantic and structural information. Experimental results on WebQSP and CWQ demonstrate that D-RAG outperforms state-of-the-art approaches.
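The reformulation described in the abstract can be sketched in generic notation (the symbols below, $q$ for the question, $a$ for the answer, $\mathcal{G}$ for the knowledge graph, and $\mathcal{G}_s$ for a sampled subgraph, are illustrative assumptions rather than the paper's own notation):

$$\max_{\phi,\theta}\; \mathbb{E}_{\mathcal{G}_s \sim p_\phi(\mathcal{G}_s \mid q,\, \mathcal{G})}\big[\log p_\theta(a \mid q,\, \mathcal{G}_s)\big],$$

where $p_\phi$ is the retriever's distribution over subgraphs and $p_\theta$ is the generator's answer likelihood. Because the expectation ranges over a discrete sampling step, a relaxation such as Gumbel-Softmax is needed to pass gradients through the sample to the retriever parameters $\phi$.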
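A minimal PyTorch sketch of how Gumbel-Softmax reparameterization can make subgraph selection differentiable. Everything here (the function names, the per-triple keep/drop scoring, and the soft-prompt fusion) is an illustrative assumption, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

def soft_select_triples(triple_logits: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Differentiably sample a soft keep-mask over candidate triples.

    triple_logits: (T, 2) retriever scores per triple for {keep, drop}.
    Returns a (T,) mask in [0, 1]; gradients flow back to the retriever.
    """
    # Gumbel-Softmax relaxation of the discrete keep/drop choice;
    # hard=True would yield one-hot samples with a straight-through gradient.
    y = F.gumbel_softmax(triple_logits, tau=tau, hard=False, dim=-1)
    return y[:, 0]

def build_soft_prompt(mask: torch.Tensor, triple_embeds: torch.Tensor) -> torch.Tensor:
    """Fuse the soft mask with triple embeddings into a 'neural prompt'.

    mask: (T,), triple_embeds: (T, d). The weighted embeddings stand in for
    the paper's prompt construction that fuses semantic and structural
    information (details assumed here).
    """
    return mask.unsqueeze(-1) * triple_embeds  # (T, d) soft prompt tokens

# Usage: score candidate triples, sample a soft subgraph, and feed the
# weighted triple embeddings to the generator as prompt tokens; the
# generator's answer log-likelihood then backpropagates into the scores.
logits = torch.randn(128, 2, requires_grad=True)  # toy retriever scores
embeds = torch.randn(128, 768)                    # toy triple embeddings
prompt = build_soft_prompt(soft_select_triples(logits), embeds)
loss = -prompt.sum()  # placeholder for the generator's negative log-likelihood
loss.backward()
print(logits.grad.shape)  # torch.Size([128, 2]): gradients reach the retriever
```

The point of the sketch is the gradient path: because the discrete subgraph choice is replaced by a continuous relaxation, the generator's loss can update the retriever end-to-end, which is exactly the non-differentiability the abstract says prior RAG-style KGQA pipelines could not overcome.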