Call Me When Necessary: LLMs can Efficiently and Faithfully Reason over Structured Environments
Sitao Cheng | Ziyuan Zhuang | Yong Xu | Fangkai Yang | Chaoyun Zhang | Xiaoting Qin | Xiang Huang | Ling Chen | Qingwei Lin | Dongmei Zhang | Saravan Rajmohan | Qi Zhang
Findings of the Association for Computational Linguistics: ACL 2024
Large Language Models (LLMs) have shown potential in reasoning over structured environments, e.g., knowledge graphs and tables. Such tasks typically require multi-hop reasoning, i.e., matching natural language utterances with instances in the environment. Previous works adopt LLMs to incrementally build a reasoning path, where the LLMs either invoke tools or select items through step-by-step interaction with the environment. We propose Reasoning-Path-Editing (Readi), a novel framework where LLMs can efficiently and faithfully reason over structured environments. In Readi, an LLM initially generates a reasoning path given a query and edits the path only when necessary. We instantiate the path on the structured environment and, if anything goes wrong, provide feedback for the LLM to edit the path. Experimental results on three KGQA and two TableQA datasets show the effectiveness of Readi: it significantly surpasses previous LLM-based methods (by 9.1% Hit@1 on WebQSP, 12.4% on MQA-3H and 9.5% on WTQ), is comparable with state-of-the-art fine-tuned methods (67% on CWQ and 74.7% on WebQSP), and substantially boosts vanilla LLMs (by 14.9% on CWQ). Our code will be available at https://aka.ms/readi.
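To make the generate-then-edit loop described in the abstract concrete, here is a minimal sketch in Python. All names (`readi_answer`, `llm.generate_path`, `environment.instantiate`, `llm.edit_path`, `max_edits`) are hypothetical placeholders for illustration, not the paper's released API; only the control flow reflects the abstract: the path is edited only when instantiation on the environment fails.

```python
# Hypothetical sketch of the Readi loop; helper names are placeholders,
# not the actual interface from https://aka.ms/readi.

def readi_answer(query, environment, llm, max_edits=3):
    """Generate a full reasoning path, instantiate it on the structured
    environment, and edit it only when instantiation reports a problem."""
    path = llm.generate_path(query)              # initial complete reasoning path
    for _ in range(max_edits):
        result, feedback = environment.instantiate(path)
        if result is not None:                   # path grounded successfully
            return llm.answer(query, result)     # reason over the matched instances
        # Instantiation failed: pass the environment's feedback back to the
        # LLM so it can repair the faulty part of the path.
        path = llm.edit_path(query, path, feedback)
    return None                                  # give up after max_edits attempts
```

The contrast with step-by-step agents is that the environment is consulted only to check a complete candidate path, so a query that instantiates on the first try needs no further interaction, which is the source of the efficiency gains the abstract claims.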