Abstract
A common thread of open-domain question answering (QA) models employs a retriever-reader pipeline that first retrieves a handful of relevant passages from Wikipedia and then peruses the passages to produce an answer. However, even state-of-the-art readers fail to capture the complex relationships between entities appearing in questions and retrieved passages, leading to answers that contradict the facts. In light of this, we propose a novel knowledge graph enhanced passage reader, namely Grape, to improve the reader performance for open-domain QA. Specifically, for each pair of question and retrieved passage, we first construct a localized bipartite graph, attributed to entity embeddings extracted from the intermediate layer of the reader model. Then, a graph neural network learns relational knowledge while fusing graph and contextual representations into the hidden states of the reader model. Experiments on three open-domain QA benchmarks show Grape can improve the state-of-the-art performance by up to 2.2 exact match score with a negligible overhead increase, with the same retriever and retrieved passages. Our code is publicly available at https://github.com/jumxglhf/GRAPE.
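To make the fusion idea in the abstract concrete, here is a minimal, hedged sketch of one message-passing round over a question-passage bipartite entity graph, with the updated entity states written back into the reader's token hidden states. This is not the authors' implementation; the class and argument names (`BipartiteEntityFusion`, `q_spans`, `p_spans`, `edges`) are illustrative, entity linking is assumed to be done by an external tool, and Grape's actual GNN and fusion layers differ in detail.

```python
# Illustrative sketch only -- not the GRAPE codebase.
import torch
import torch.nn as nn


class BipartiteEntityFusion(nn.Module):
    """One round of message passing over a question-passage bipartite entity graph,
    then fuse the updated passage-entity states back into the token hidden states."""

    def __init__(self, d_model: int):
        super().__init__()
        self.msg = nn.Linear(d_model, d_model)       # transform messages from question entities
        self.upd = nn.GRUCell(d_model, d_model)      # update passage-entity states with aggregated messages
        self.fuse = nn.Linear(2 * d_model, d_model)  # fuse entity state into the token states it spans

    def forward(self, hidden, q_spans, p_spans, edges):
        # hidden:  (seq_len, d) intermediate reader states for one question-passage pair
        # q_spans: list of (start, end) token spans for question entities
        # p_spans: list of (start, end) token spans for passage entities
        # edges:   list of (q_idx, p_idx) pairs linking question and passage entities
        def span_embed(spans):
            # mean-pool each entity span to obtain its initial embedding
            return torch.stack([hidden[s:e].mean(dim=0) for s, e in spans])

        q_ent, p_ent = span_embed(q_spans), span_embed(p_spans)

        # aggregate messages from question entities into their linked passage entities
        agg = torch.zeros_like(p_ent)
        for q_idx, p_idx in edges:
            agg[p_idx] = agg[p_idx] + self.msg(q_ent[q_idx])
        p_ent = self.upd(agg, p_ent)

        # write the updated passage-entity states back into the token hidden states
        out = hidden.clone()
        for (s, e), ent in zip(p_spans, p_ent):
            fused = self.fuse(torch.cat([hidden[s:e], ent.expand(e - s, -1)], dim=-1))
            out[s:e] = fused
        return out


# toy usage: three question-entity spans linked to two passage-entity spans
hidden = torch.randn(32, 64)
layer = BipartiteEntityFusion(64)
out = layer(hidden,
            q_spans=[(1, 3), (4, 6), (7, 8)],
            p_spans=[(10, 13), (20, 22)],
            edges=[(0, 0), (1, 0), (2, 1)])
print(out.shape)  # torch.Size([32, 64])
```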
- Anthology ID: 2022.findings-emnlp.13
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 169–181
- URL: https://aclanthology.org/2022.findings-emnlp.13
- DOI: 10.18653/v1/2022.findings-emnlp.13
- Cite (ACL): Mingxuan Ju, Wenhao Yu, Tong Zhao, Chuxu Zhang, and Yanfang Ye. 2022. Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question Answering. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 169–181, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Grape: Knowledge Graph Enhanced Passage Reader for Open-domain Question Answering (Ju et al., Findings 2022)
- PDF: https://aclanthology.org/2022.findings-emnlp.13.pdf