Abstract
In this paper, we introduce and justify a new task, causal link extraction based on beliefs, and present a qualitative analysis of the ability of a large language model (InstructGPT-3) to generate implicit consequences of beliefs. Because the model-generated consequences are promising but not consistent, we propose directions for future work, including data collection; explicit consequence extraction using rule-based and language-modeling-based approaches; and using explicitly stated consequences of beliefs to fine-tune or prompt the language model to produce outputs suitable for the task.
- Anthology ID: 2022.insights-1.22
- Volume: Proceedings of the Third Workshop on Insights from Negative Results in NLP
- Month: May
- Year: 2022
- Address: Dublin, Ireland
- Venue: insights
- Publisher: Association for Computational Linguistics
- Pages: 159–164
- URL: https://aclanthology.org/2022.insights-1.22
- DOI: 10.18653/v1/2022.insights-1.22
- Cite (ACL): Maria Alexeeva, Allegra A. Beal Cohen, and Mihai Surdeanu. 2022. Combining Extraction and Generation for Constructing Belief-Consequence Causal Links. In Proceedings of the Third Workshop on Insights from Negative Results in NLP, pages 159–164, Dublin, Ireland. Association for Computational Linguistics.
- Cite (Informal): Combining Extraction and Generation for Constructing Belief-Consequence Causal Links (Alexeeva et al., insights 2022)
- PDF: https://preview.aclanthology.org/remove-xml-comments/2022.insights-1.22.pdf