No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension

Xuguang Wang, Linjun Shou, Ming Gong, Nan Duan, Daxin Jiang


Abstract
The Natural Questions (NQ) benchmark brings new challenges to Machine Reading Comprehension: the answers are not only at different levels of granularity (long and short), but also of richer types (including no-answer, yes/no, single-span and multi-span). In this paper, we target this challenge and handle all answer types systematically. In particular, we propose a novel approach called Reflection Net, which leverages a two-step training procedure to identify the no-answer and wrong-answer cases. Extensive experiments are conducted to verify the effectiveness of our approach. At the time of writing (May 20, 2020), our approach ranked first on both the long and short answer leaderboards, with F1 scores of 77.2 and 64.1, respectively.
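The abstract only names the two-step idea, so the sketch below is a rough, hypothetical illustration (not the authors' implementation) of how a reader-plus-reflection pipeline can decide between answering and abstaining. All names here (ReaderOutput, reflection_score, NO_ANSWER_THRESHOLD) and the scoring heuristic are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Answer types from the NQ setting described in the abstract.
ANSWER_TYPES = ["no-answer", "yes", "no", "single-span", "multi-span"]


@dataclass
class ReaderOutput:
    """Step 1 (hypothetical): what a reader model might emit for one question/document pair."""
    spans: List[Tuple[int, int]]   # candidate answer span(s) over the document
    type_probs: List[float]        # probability for each entry in ANSWER_TYPES
    span_score: float              # the reader's own confidence in the span(s)


def reflection_score(out: ReaderOutput) -> float:
    """Step 2 (hypothetical): a 'reflection' model re-scores the reader's candidate.

    In a real system this would be a trained classifier over features of the
    candidate answer; here it is a stand-in heuristic that combines the reader's
    span score with how peaked the answer-type distribution is.
    """
    type_confidence = max(out.type_probs)
    return 0.5 * out.span_score + 0.5 * type_confidence


NO_ANSWER_THRESHOLD = 0.6  # hypothetical abstention threshold


def predict(out: ReaderOutput) -> Optional[List[Tuple[int, int]]]:
    """Return answer spans, or None ('no answer') when the reflection step is unsure."""
    if reflection_score(out) < NO_ANSWER_THRESHOLD:
        return None  # abstain: no answer is better than a wrong answer
    predicted_type = ANSWER_TYPES[out.type_probs.index(max(out.type_probs))]
    if predicted_type == "no-answer":
        return None
    return out.spans


if __name__ == "__main__":
    confident = ReaderOutput(spans=[(10, 17)],
                             type_probs=[0.05, 0.0, 0.0, 0.9, 0.05],
                             span_score=0.8)
    shaky = ReaderOutput(spans=[(3, 5)],
                         type_probs=[0.3, 0.1, 0.1, 0.3, 0.2],
                         span_score=0.4)
    print(predict(confident))  # -> [(10, 17)]
    print(predict(shaky))      # -> None (abstain)
```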
Anthology ID:
2020.findings-emnlp.370
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4141–4150
URL:
https://aclanthology.org/2020.findings-emnlp.370
DOI:
10.18653/v1/2020.findings-emnlp.370
Cite (ACL):
Xuguang Wang, Linjun Shou, Ming Gong, Nan Duan, and Daxin Jiang. 2020. No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4141–4150, Online. Association for Computational Linguistics.
Cite (Informal):
No Answer is Better Than Wrong Answer: A Reflection Model for Document Level Machine Reading Comprehension (Wang et al., Findings 2020)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2020.findings-emnlp.370.pdf
Optional supplementary material:
 2020.findings-emnlp.370.OptionalSupplementaryMaterial.zip
Data
Natural Questions