Learning to Paraphrase for Question Answering

Li Dong, Jonathan Mallinson, Siva Reddy, Mirella Lapata


Abstract
Question answering (QA) systems are sensitive to the many different ways natural language expresses the same information need. In this paper we turn to paraphrases as a means of capturing this knowledge and present a general framework which learns felicitous paraphrases for various QA tasks. Our method is trained end-to-end using question-answer pairs as a supervision signal. A question and its paraphrases serve as input to a neural scoring model which assigns higher weights to linguistic expressions most likely to yield correct answers. We evaluate our approach on QA over Freebase and answer sentence selection. Experimental results on three datasets show that our framework consistently improves performance, achieving competitive results despite the use of simple QA models.
Anthology ID:
D17-1091
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
875–886
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/D17-1091/
DOI:
10.18653/v1/D17-1091
Cite (ACL):
Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to Paraphrase for Question Answering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 875–886, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Learning to Paraphrase for Question Answering (Dong et al., EMNLP 2017)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/D17-1091.pdf