Abstract
In this work, we analyze how human gaze during reading comprehension is conditioned on the given reading comprehension question, and whether this signal can be beneficial for machine reading comprehension. To this end, we collect a new eye-tracking dataset with a large number of participants engaging in a multiple choice reading comprehension task. Our analysis of this data reveals increased fixation times over parts of the text that are most relevant for answering the question. Motivated by this finding, we propose making automated reading comprehension more human-like by mimicking human information-seeking reading behavior during reading comprehension. We demonstrate that this approach leads to performance gains on multiple choice question answering in English for a state-of-the-art reading comprehension model.

- Anthology ID: 2020.conll-1.11
- Volume: Proceedings of the 24th Conference on Computational Natural Language Learning
- Month: November
- Year: 2020
- Address: Online
- Editors: Raquel Fernández, Tal Linzen
- Venue: CoNLL
- SIG: SIGNLL
- Publisher: Association for Computational Linguistics
- Pages: 142–152
- URL: https://aclanthology.org/2020.conll-1.11
- DOI: 10.18653/v1/2020.conll-1.11
- Cite (ACL): Jonathan Malmaud, Roger Levy, and Yevgeni Berzak. 2020. Bridging Information-Seeking Human Gaze and Machine Reading Comprehension. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 142–152, Online. Association for Computational Linguistics.
- Cite (Informal): Bridging Information-Seeking Human Gaze and Machine Reading Comprehension (Malmaud et al., CoNLL 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2020.conll-1.11.pdf
- Data: OneStopEnglish, OneStopQA, RACE