Abstract
In this paper we explore the problem of machine reading comprehension, focusing on the BoolQ dataset of Yes/No questions. We carry out an error analysis of a BERT-based machine reading comprehension model on this dataset, revealing issues such as unstable model behaviour and some noise within the dataset itself. We then experiment with two approaches for integrating information from knowledge graphs: (i) concatenating knowledge graph triples to text passages and (ii) encoding knowledge with a Graph Neural Network. Neither of these approaches shows a clear improvement, and we hypothesize that this may be due to a combination of inaccuracies in the knowledge graph, imprecision in entity linking, and the models' inability to capture additional information from knowledge graphs.
- Anthology ID: 2020.insights-1.2
- Volume: Proceedings of the First Workshop on Insights from Negative Results in NLP
- Month: November
- Year: 2020
- Address: Online
- Venue: insights
- Publisher: Association for Computational Linguistics
- Pages: 6–14
- URL: https://aclanthology.org/2020.insights-1.2
- DOI: 10.18653/v1/2020.insights-1.2
- Cite (ACL): Daria Dzendzik, Carl Vogel, and Jennifer Foster. 2020. Q. Can Knowledge Graphs be used to Answer Boolean Questions? A. It’s complicated!. In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 6–14, Online. Association for Computational Linguistics.
- Cite (Informal): Q. Can Knowledge Graphs be used to Answer Boolean Questions? A. It’s complicated! (Dzendzik et al., insights 2020)
- PDF: https://preview.aclanthology.org/auto-file-uploads/2020.insights-1.2.pdf
- Data: BoolQ, ConceptNet, MultiNLI, SuperGLUE
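Approach (i) from the abstract — concatenating knowledge graph triples to the text passage — can be sketched as plain string construction. This is an illustrative sketch only, not the paper's implementation: the relation templates, the example triple, and the helper names (`verbalize`, `build_input`) are all hypothetical, and in practice the `[CLS]`/`[SEP]` markers would be added by a BERT tokenizer rather than written by hand.

```python
# Hypothetical sketch of approach (i): verbalize (subject, relation, object)
# triples and append them to the passage before encoding question + passage
# in the usual BERT sentence-pair layout. Templates are illustrative; a real
# system would cover the full ConceptNet relation inventory (IsA, PartOf, ...).

def verbalize(triple):
    """Turn a (subject, relation, object) triple into a short sentence."""
    subj, rel, obj = triple
    templates = {
        "IsA": "{0} is a kind of {1}.",
        "PartOf": "{0} is part of {1}.",
    }
    if rel in templates:
        return templates[rel].format(subj, obj)
    # Fallback: emit the raw relation name between subject and object.
    return f"{subj} {rel} {obj}."

def build_input(question, passage, triples):
    """Concatenate question, passage, and verbalized triples into a single
    BERT-style input string: [CLS] question [SEP] passage + knowledge [SEP]."""
    knowledge = " ".join(verbalize(t) for t in triples)
    return f"[CLS] {question} [SEP] {passage} {knowledge} [SEP]"

example = build_input(
    "is a dog an animal?",
    "Dogs are domesticated mammals.",
    [("dog", "IsA", "animal")],
)
```

One design consideration the paper's negative result highlights: this scheme is only as good as the retrieved triples, so knowledge graph inaccuracies and entity-linking errors flow directly into the model's input.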