QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization

Yi-Ting Yeh, Yun-Nung Chen


Abstract
Standard accuracy metrics indicate that modern reading comprehension systems have achieved strong performance on many question answering datasets. However, the extent to which these systems truly understand language remains unknown, and existing systems struggle to distinguish distractor sentences that look related to the question but do not answer it. To address this problem, we propose QAInfomax as a regularizer for reading comprehension systems that maximizes mutual information among the passage, the question, and its answer. QAInfomax regularizes the model so that it does not simply learn superficial correlations for answering questions. The experiments show that our proposed QAInfomax achieves state-of-the-art performance on the benchmark Adversarial-SQuAD dataset.
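The core idea of the regularizer is to reward representations of matched question–answer pairs for carrying mutual information, so the model cannot rely on shallow lexical overlap alone. As a hedged illustration only (the paper follows a Deep InfoMax-style objective; this sketch instead uses the related InfoNCE lower bound, and all names here are hypothetical), a minimal NumPy version of such an MI estimate over a batch of paired vectors might look like:

```python
import numpy as np

def infonce_bound(q, a, temperature=0.1):
    """InfoNCE lower bound on the mutual information between paired rows
    of q and a (each of shape [n, d]).  Matched pairs (q[i], a[i]) are
    positives; the other rows in the batch serve as negatives."""
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    scores = q @ a.T / temperature             # [n, n] similarity matrix
    # numerically stable row-wise log-softmax; the diagonal holds positives
    scores -= scores.max(axis=1, keepdims=True)
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    n = q.shape[0]
    # mean log-probability of the true pairing, plus log n, lower-bounds MI
    return np.log(n) + np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
base = rng.normal(size=(32, 16))
# correlated pairs should score a higher MI bound than independent ones
paired = infonce_bound(base, base + 0.01 * rng.normal(size=(32, 16)))
random = infonce_bound(base, rng.normal(size=(32, 16)))
assert paired > random
```

Used as a regularizer, the negative of this bound would be added to the span-prediction loss, pushing the encoder to keep question and answer representations mutually informative rather than merely co-occurring.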
Anthology ID:
D19-1333
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
3370–3375
URL:
https://aclanthology.org/D19-1333
DOI:
10.18653/v1/D19-1333
Cite (ACL):
Yi-Ting Yeh and Yun-Nung Chen. 2019. QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3370–3375, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
QAInfomax: Learning Robust Question Answering System by Mutual Information Maximization (Yeh & Chen, EMNLP-IJCNLP 2019)
PDF:
https://preview.aclanthology.org/auto-file-uploads/D19-1333.pdf
Attachment:
 D19-1333.Attachment.zip
Code
 MiuLab/QAInfomax
Data
SQuAD