Generalizing Question Answering System with Pre-trained Language Model Fine-tuning

Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, Pascale Fung


Abstract
With a large number of datasets being released and new techniques being proposed, question answering (QA) systems have witnessed great breakthroughs in reading comprehension (RC) tasks. However, most existing methods focus on improving in-domain performance, leaving open the research question of how these models and techniques can generalize to out-of-domain and unseen RC tasks. To enhance the generalization ability, we propose a multi-task learning framework that learns the shared representation across different tasks. Our model is built on top of a large pre-trained language model, such as XLNet, and then fine-tuned on multiple RC datasets. Experimental results show the effectiveness of our methods, with an average Exact Match score of 56.59 and an average F1 score of 68.98, which significantly improves over the BERT-Large baseline by 8.39 and 7.22, respectively.
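The sketch below illustrates the general recipe the abstract describes: a single pre-trained XLNet encoder with a span-prediction head, fine-tuned on examples pooled from several RC datasets so that the representation is shared across tasks. It is a minimal illustration, not the authors' implementation; the model size, toy examples, preprocessing, and hyperparameters are assumptions, and it uses the Hugging Face Transformers classes XLNetTokenizerFast and XLNetForQuestionAnsweringSimple as a stand-in for the paper's setup.

```python
"""Minimal multi-task fine-tuning sketch for extractive QA (illustrative only).

One shared XLNet encoder + span head is trained on examples pooled from
multiple RC datasets. Dataset choice, preprocessing, and hyperparameters
are assumptions, not taken from the paper.
"""
import random

import torch
from transformers import XLNetTokenizerFast, XLNetForQuestionAnsweringSimple

MODEL_NAME = "xlnet-large-cased"  # the paper builds on XLNet; exact size is an assumption
tokenizer = XLNetTokenizerFast.from_pretrained(MODEL_NAME)
model = XLNetForQuestionAnsweringSimple.from_pretrained(MODEL_NAME)

# Toy pooled training set standing in for a mixture of RC datasets.
# Each example: question, context, and the answer's character span in the context.
pooled_examples = [
    {"question": "Where is the Eiffel Tower?",
     "context": "The Eiffel Tower is located in Paris, France.",
     "answer_start": 31, "answer_text": "Paris"},
    {"question": "Who wrote Hamlet?",
     "context": "Hamlet is a tragedy written by William Shakespeare.",
     "answer_start": 31, "answer_text": "William Shakespeare"},
]

def encode(example):
    """Tokenize a (question, context) pair and map the answer char span to token positions."""
    enc = tokenizer(example["question"], example["context"],
                    truncation="only_second", max_length=384,
                    return_offsets_mapping=True, return_tensors="pt")
    offsets = enc.pop("offset_mapping")[0]
    seq_ids = enc.sequence_ids(0)
    start_char = example["answer_start"]
    end_char = start_char + len(example["answer_text"])
    start_tok = end_tok = 0
    for i, (s, e) in enumerate(offsets.tolist()):
        if seq_ids[i] != 1:            # only consider context tokens
            continue
        if s <= start_char < e:
            start_tok = i
        if s < end_char <= e:
            end_tok = i
    enc["start_positions"] = torch.tensor([start_tok])
    enc["end_positions"] = torch.tensor([end_tok])
    return dict(enc.items())

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for epoch in range(2):
    random.shuffle(pooled_examples)    # mix examples across datasets each epoch
    for example in pooled_examples:
        batch = encode(example)
        loss = model(**batch).loss     # span start/end cross-entropy over the shared encoder
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

Because all datasets are trained through the same encoder and span head, the model is encouraged to learn task-agnostic reading-comprehension features rather than fitting one dataset's idiosyncrasies, which is the mechanism the abstract credits for the out-of-domain gains.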
Anthology ID:
D19-5827
Volume:
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
203–211
URL:
https://aclanthology.org/D19-5827
DOI:
10.18653/v1/D19-5827
Cite (ACL):
Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, and Pascale Fung. 2019. Generalizing Question Answering System with Pre-trained Language Model Fine-tuning. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 203–211, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Generalizing Question Answering System with Pre-trained Language Model Fine-tuning (Su et al., 2019)
PDF:
https://preview.aclanthology.org/add_acl24_videos/D19-5827.pdf