Interpretable Proof Generation via Iterative Backward Reasoning

Hanhao Qu, Yu Cao, Jun Gao, Liang Ding, Ruifeng Xu


Abstract
We present IBR, an Iterative Backward Reasoning model for proof generation on rule-based Question Answering (QA), where models must reason over a series of textual rules and facts to find the relevant proof path and derive the final answer. We address the limitations of existing work in two respects: 1) we enhance the interpretability of the reasoning procedure with detailed tracking, predicting nodes and edges in the proof path iteratively backward from the question; 2) we improve efficiency and accuracy by reasoning over elaborate representations of nodes and history paths, without any intermediate texts that may introduce external noise during proof generation. IBR comprises three main modules: QA and proof strategy prediction, which obtains the answer and guides the subsequent procedure; parent node prediction, which determines the node in the existing proof to which a new child node will link; and child node prediction, which determines the new node to be added to the proof. Experiments on both synthetic and paraphrased datasets demonstrate that IBR achieves better in-domain performance as well as cross-domain transferability than several strong baselines. Our code and models are available at https://github.com/find-knowledge/IBR.
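The three-module loop described in the abstract can be sketched as a simple backward-chaining procedure. The sketch below is illustrative only: all names (`build_proof`, the rule/fact encoding) are hypothetical and do not reflect the paper's actual neural implementation, which predicts parent and child nodes from learned representations rather than symbolic lookup.

```python
# Hypothetical sketch of iterative backward proof construction,
# mirroring the abstract's parent/child node prediction steps.

def build_proof(question, rules, facts):
    """Grow a proof backward from the question.

    rules: dict mapping a conclusion to the premises it requires.
    facts: set of known facts (the proof's leaves).
    Returns (provable, edges) where edges are (parent, child) pairs.
    """
    proof_edges = []          # (parent, child) links in the proof path
    frontier = [question]     # nodes whose support is still unproven
    while frontier:
        parent = frontier.pop()           # analogue of parent node prediction
        if parent in facts:
            continue                      # leaf node: supported by a fact
        if parent not in rules:
            return False, proof_edges     # no rule concludes this node
        for premise in rules[parent]:     # analogue of child node prediction
            proof_edges.append((parent, premise))
            frontier.append(premise)
    return True, proof_edges

# Toy ProofWriter-style example: a rule plus two supporting facts.
rules = {"green(Erin)": ["big(Erin)", "kind(Erin)"]}
facts = {"big(Erin)", "kind(Erin)"}
answer, proof = build_proof("green(Erin)", rules, facts)
```

In the actual model, each iteration conditions on representations of the question and the proof built so far, so no intermediate text needs to be generated between steps.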
Anthology ID:
2022.naacl-main.216
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2968–2981
URL:
https://aclanthology.org/2022.naacl-main.216
DOI:
10.18653/v1/2022.naacl-main.216
Cite (ACL):
Hanhao Qu, Yu Cao, Jun Gao, Liang Ding, and Ruifeng Xu. 2022. Interpretable Proof Generation via Iterative Backward Reasoning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2968–2981, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Interpretable Proof Generation via Iterative Backward Reasoning (Qu et al., NAACL 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.naacl-main.216.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2022.naacl-main.216.mp4
Code
find-knowledge/ibr
Data
ProofWriter