Abstract
We study the challenge of learning causal reasoning over procedural text to answer “What if...” questions when external commonsense knowledge is required. We propose a novel multi-hop graph reasoning model that 1) efficiently extracts a commonsense subgraph containing the most relevant information from a large knowledge graph, and 2) predicts the causal answer by reasoning over the representations obtained from the commonsense subgraph and the contextual interactions between the questions and the context. We evaluate our model on the WIQA benchmark and achieve state-of-the-art performance compared to recent models.
- Anthology ID:
- 2022.findings-acl.152
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2022
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1927–1933
- URL:
- https://aclanthology.org/2022.findings-acl.152
- DOI:
- 10.18653/v1/2022.findings-acl.152
- Cite (ACL):
- Chen Zheng and Parisa Kordjamshidi. 2022. Relevant CommonSense Subgraphs for “What if...” Procedural Reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1927–1933, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- Relevant CommonSense Subgraphs for “What if…” Procedural Reasoning (Zheng & Kordjamshidi, Findings 2022)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/2022.findings-acl.152.pdf
- Data
- ConceptNet, WIQA
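The abstract's first step, extracting a relevant commonsense subgraph from a large knowledge graph such as ConceptNet, can be sketched as a breadth-first k-hop expansion around seed concepts mentioned in the question and context. This is a minimal illustrative sketch, not the authors' actual extraction method; the toy graph, relation names, and `max_hops` parameter are all hypothetical:

```python
from collections import deque

def extract_subgraph(graph, seed_concepts, max_hops=2):
    """Collect all triples reachable within max_hops of any seed concept.

    graph: dict mapping a concept to a list of (relation, neighbor) tuples.
    Returns a set of (head, relation, tail) triples forming the subgraph.
    """
    # Seeds start at depth 0; skip seeds absent from the graph.
    visited = {c: 0 for c in seed_concepts if c in graph}
    queue = deque(visited)
    triples = set()
    while queue:
        node = queue.popleft()
        depth = visited[node]
        if depth == max_hops:
            continue  # do not expand beyond the hop budget
        for relation, neighbor in graph.get(node, []):
            triples.add((node, relation, neighbor))
            if neighbor not in visited:
                visited[neighbor] = depth + 1
                queue.append(neighbor)
    return triples

# Toy ConceptNet-style graph (hypothetical triples, for illustration only).
kg = {
    "rain": [("Causes", "flood"), ("RelatedTo", "water")],
    "flood": [("Causes", "erosion")],
    "water": [("UsedFor", "drinking")],
    "sun": [("Causes", "heat")],
}

# Seeding from a concept in the question keeps only the relevant region:
# "sun"-related triples never enter the subgraph.
sub = extract_subgraph(kg, ["rain"], max_hops=2)
```

Bounding the expansion by hop count is one simple way to keep the subgraph small relative to the full knowledge graph; the extracted triples could then be encoded for the downstream reasoning step described in the abstract.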