Recovering Question Answering Errors via Query Revision

Semih Yavuz, Izzeddin Gur, Yu Su, Xifeng Yan


Abstract
Existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes. In this work, we propose to crosscheck the corresponding KB relations behind the predicted answers and identify potential inconsistencies. Instead of developing a new model that accepts evidence collected from these relations, we choose to plug them back into the original questions directly and check whether the revised questions make sense. A bidirectional LSTM is applied to encode the revised questions, and we develop a scoring mechanism over their encodings to refine the predictions of a base QA system. This approach improves the F1 score of STAGG (Yih et al., 2015), one of the leading QA systems, from 52.5% to 53.9% on the WEBQUESTIONS dataset.
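
As an illustration of the mechanism described in the abstract, the following is a minimal PyTorch sketch (not the authors' released code) of encoding a revised question with a bidirectional LSTM and scoring the encoding so that a base QA system's candidate answers can be reranked. The class name, dimensions, mean-pooling, and linear scoring head are all assumptions made for this sketch.

import torch
import torch.nn as nn

class RevisedQuestionScorer(nn.Module):
    """Hypothetical scorer: a BiLSTM encoder over a revised question,
    followed by a linear head producing a plausibility score."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM encoder over the revised-question tokens.
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Assumed scoring head: question encoding -> scalar score.
        self.score = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices of revised questions.
        outputs, _ = self.encoder(self.embed(token_ids))
        # Mean-pool over time as a simple sentence encoding (an assumption;
        # the paper's exact pooling and scoring details may differ).
        encoding = outputs.mean(dim=1)
        return self.score(encoding).squeeze(-1)

# Usage: score one revised question per candidate answer and keep the
# candidate whose revision "makes sense" most, i.e., scores highest.
scorer = RevisedQuestionScorer(vocab_size=10000)
revised = torch.randint(0, 10000, (3, 12))  # 3 candidate revisions, 12 tokens
best_candidate = scorer(revised).argmax().item()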
Anthology ID: D17-1094
Volume: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month: September
Year: 2017
Address: Copenhagen, Denmark
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 903–909
URL: https://aclanthology.org/D17-1094
DOI: 10.18653/v1/D17-1094
Cite (ACL): Semih Yavuz, Izzeddin Gur, Yu Su, and Xifeng Yan. 2017. Recovering Question Answering Errors via Query Revision. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 903–909, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Recovering Question Answering Errors via Query Revision (Yavuz et al., EMNLP 2017)
PDF: https://preview.aclanthology.org/ingestion-script-update/D17-1094.pdf
Attachment: D17-1094.Attachment.zip