To Test Machine Comprehension, Start by Defining Comprehension

Jesse Dunietz, Greg Burnham, Akash Bharadwaj, Owen Rambow, Jennifer Chu-Carroll, Dave Ferrucci

Abstract
Many tasks aim to measure machine reading comprehension (MRC), often focusing on question types presumed to be difficult. Rarely, however, do task designers start by considering what systems should in fact comprehend. In this paper we make two key contributions. First, we argue that existing approaches do not adequately define comprehension; they are too unsystematic about what content is tested. Second, we present a detailed definition of comprehension—a “Template of Understanding”—for a widely useful class of texts, namely short narratives. We then conduct an experiment that strongly suggests existing systems are not up to the task of narrative understanding as we define it.
Anthology ID: 2020.acl-main.701
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7839–7859
URL: https://aclanthology.org/2020.acl-main.701
DOI: 10.18653/v1/2020.acl-main.701
Cite (ACL): Jesse Dunietz, Greg Burnham, Akash Bharadwaj, Owen Rambow, Jennifer Chu-Carroll, and Dave Ferrucci. 2020. To Test Machine Comprehension, Start by Defining Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7839–7859, Online. Association for Computational Linguistics.
Cite (Informal): To Test Machine Comprehension, Start by Defining Comprehension (Dunietz et al., ACL 2020)
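For convenience, a BibTeX entry can be assembled from the metadata above. All fields below are taken directly from this page; the citation key is an assumption following the Anthology's usual lastname-etal-year-firstword pattern:

    @inproceedings{dunietz-etal-2020-test,  % key assumed, not listed on this page
        title     = "To Test Machine Comprehension, Start by Defining Comprehension",
        author    = "Dunietz, Jesse and Burnham, Greg and Bharadwaj, Akash and
                     Rambow, Owen and Chu-Carroll, Jennifer and Ferrucci, Dave",
        booktitle = "Proceedings of the 58th Annual Meeting of the Association
                     for Computational Linguistics",
        month     = jul,
        year      = "2020",
        address   = "Online",
        publisher = "Association for Computational Linguistics",
        url       = "https://aclanthology.org/2020.acl-main.701",
        doi       = "10.18653/v1/2020.acl-main.701",
        pages     = "7839--7859",
    }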
PDF: https://aclanthology.org/2020.acl-main.701.pdf
Dataset: 2020.acl-main.701.Dataset.tgz
Video: http://slideslive.com/38928793
Data: CosmosQA, DROP, GLUE, NewsQA, QASC, Quoref, RACE, ReCoRD, SQuAD, SearchQA, TriviaQA, WikiHop