Question Answering through Transfer Learning from Large Fine-grained Supervision Data

Sewon Min, Minjoon Seo, Hannaneh Hajishirzi


Abstract
We show that the task of question answering (QA) can significantly benefit from the transfer learning of models trained on a different large, fine-grained QA dataset. We achieve the state of the art in two well-studied QA datasets, WikiQA and SemEval-2016 (Task 3A), through a basic transfer learning technique from SQuAD. For WikiQA, our model outperforms the previous best model by more than 8%. We demonstrate that finer supervision provides better guidance for learning lexical and syntactic information than coarser supervision, through quantitative results and visual analysis. We also show that a similar transfer learning procedure achieves the state of the art on an entailment task.
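The abstract describes pretraining a QA model on fine-grained, span-level supervision (SQuAD) and then transferring it to coarser, sentence-level answer selection (WikiQA, SemEval-2016 Task 3A). The PyTorch sketch below illustrates that pretrain-then-transfer procedure only in outline; it is not the authors' architecture, and every module name, dimension, and the synthetic batches are illustrative assumptions.

# Minimal sketch of the pretrain-then-transfer setup described in the abstract.
# NOT the paper's actual model; all names, sizes, and data here are assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder whose weights are transferred across tasks."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):                      # (batch, seq_len)
        hidden, _ = self.rnn(self.embed(tokens))    # (batch, seq_len, 2*dim)
        return hidden

class SpanQA(nn.Module):
    """Fine-grained (SQuAD-style) head: predict answer start/end positions."""
    def __init__(self, encoder, dim=64):
        super().__init__()
        self.encoder = encoder
        self.start = nn.Linear(2 * dim, 1)
        self.end = nn.Linear(2 * dim, 1)

    def forward(self, tokens):
        h = self.encoder(tokens)
        return self.start(h).squeeze(-1), self.end(h).squeeze(-1)

class SentenceQA(nn.Module):
    """Coarse (WikiQA-style) head: 2-way score for whether a sentence answers the question."""
    def __init__(self, encoder, dim=64):
        super().__init__()
        self.encoder = encoder
        self.score = nn.Linear(2 * dim, 2)

    def forward(self, tokens):
        h = self.encoder(tokens)
        return self.score(h.max(dim=1).values)      # max-pool over tokens

ce = nn.CrossEntropyLoss()

# 1) Pretrain encoder + span head on fine-grained (span-level) supervision.
encoder = Encoder()
span_model = SpanQA(encoder)
opt = torch.optim.Adam(span_model.parameters(), lr=1e-3)
tokens = torch.randint(0, 1000, (8, 50))            # fake SQuAD-style batch
start_gold = torch.randint(0, 50, (8,))
end_gold = torch.randint(0, 50, (8,))
start_logits, end_logits = span_model(tokens)
loss = ce(start_logits, start_gold) + ce(end_logits, end_gold)
loss.backward(); opt.step()

# 2) Transfer: reuse the pretrained encoder, fine-tune on sentence-level labels.
sent_model = SentenceQA(encoder)                     # same encoder object, same weights
opt2 = torch.optim.Adam(sent_model.parameters(), lr=1e-3)
sent_tokens = torch.randint(0, 1000, (8, 30))        # fake WikiQA-style batch
sent_gold = torch.randint(0, 2, (8,))
loss2 = ce(sent_model(sent_tokens), sent_gold)
opt2.zero_grad(); loss2.backward(); opt2.step()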
Anthology ID:
P17-2081
Volume:
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2017
Address:
Vancouver, Canada
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
510–517
URL:
https://aclanthology.org/P17-2081
DOI:
10.18653/v1/P17-2081
Cite (ACL):
Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question Answering through Transfer Learning from Large Fine-grained Supervision Data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 510–517, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Question Answering through Transfer Learning from Large Fine-grained Supervision Data (Min et al., ACL 2017)
PDF:
https://preview.aclanthology.org/auto-file-uploads/P17-2081.pdf
Data
SICK, SNLI, SQuAD, WikiQA