AbductionRules: Training Transformers to Explain Unexpected Inputs
Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock
Abstract
Transformers have recently been shown to be capable of reliably performing logical reasoning over facts and rules expressed in natural language, but abductive reasoning - inference to the best explanation of an unexpected observation - has been underexplored despite significant applications to scientific discovery, common-sense reasoning, and model interpretability. This paper presents AbductionRules, a group of natural language datasets designed to train and test generalisable abduction over natural-language knowledge bases. We use these datasets to finetune pretrained Transformers and discuss their performance, finding that our models learned generalisable abductive techniques but also learned to exploit the structure of our data. Finally, we discuss the viability of this approach to abductive reasoning and ways in which it may be improved in future work.
- Anthology ID:
- 2022.findings-acl.19
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2022
- Month:
- May
- Year:
- 2022
- Address:
- Dublin, Ireland
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 218–227
- URL:
- https://aclanthology.org/2022.findings-acl.19
- DOI:
- 10.18653/v1/2022.findings-acl.19
- Cite (ACL):
- Nathan Young, Qiming Bao, Joshua Bensemann, and Michael Witbrock. 2022. AbductionRules: Training Transformers to Explain Unexpected Inputs. In Findings of the Association for Computational Linguistics: ACL 2022, pages 218–227, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal):
- AbductionRules: Training Transformers to Explain Unexpected Inputs (Young et al., Findings 2022)
- PDF:
- https://aclanthology.org/2022.findings-acl.19.pdf
- Code
- strong-ai-lab/abductionrules
- Data
- ProofWriter
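To make the task concrete, below is a minimal, hypothetical sketch of the finetuning setup the abstract describes: a pretrained seq2seq Transformer is trained to generate the explanation (the abduced fact) for an unexpected observation, given a natural-language knowledge base of facts and rules. The model choice (`t5-small`), the input formatting, and the example instance are illustrative assumptions, not taken from the released datasets or the authors' training code.

```python
# Hedged sketch: finetuning a seq2seq Transformer for abduction.
# The example wording and prompt format below are illustrative only.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"  # assumption: any pretrained seq2seq checkpoint would do
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# One AbductionRules-style training instance (illustrative, not from the data):
context = (
    "Facts: The mouse is small. "
    "Rules: If something is small and timid then it hides."
)
observation = "The mouse hides."
explanation = "The mouse is timid."  # the missing fact to be abduced

# Train the model to map (knowledge base, observation) -> explanation.
inputs = tokenizer(
    f"explain: {context} observation: {observation}",
    return_tensors="pt",
)
labels = tokenizer(explanation, return_tensors="pt").input_ids

outputs = model(**inputs, labels=labels)
loss = outputs.loss  # standard cross-entropy over the explanation tokens
loss.backward()      # one gradient step; a real run would use an optimizer loop

# At inference time, the finetuned model generates the abduced fact:
generated = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

In this framing, abduction is cast as conditional text generation, so success depends on whether the model learns the abductive pattern rather than surface regularities of the dataset, which is exactly the generalisation question the abstract raises.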