Perhaps PTLMs Should Go to School – A Task to Assess Open Book and Closed Book QA
Manuel Ciosici, Joe Cecil, Dong-Ho Lee, Alex Hedges, Marjorie Freedman, Ralph Weischedel
Abstract
Our goal is to deliver a new task and leaderboard to stimulate research on question answering and pre-trained language models (PTLMs) to understand a significant instructional document, e.g., an introductory college textbook or a manual. PTLMs have shown great success in many question-answering tasks, given significant supervised training, but much less so in zero-shot settings. We propose a new task that includes two college-level introductory texts in the social sciences (American Government 2e) and humanities (U.S. History), hundreds of true/false statements based on review questions written by the textbook authors, validation/development tests based on the first eight chapters of the textbooks, blind tests based on the remaining textbook chapters, and baseline results given state-of-the-art PTLMs. Since the questions are balanced, random performance should be ~50%. T5, fine-tuned with BoolQ, achieves the same performance, suggesting that the textbook’s content is not pre-represented in the PTLM. Taking the exam closed book, but having read the textbook (i.e., adding the textbook to T5’s pre-training), yields at best a minor improvement (56%), suggesting that the PTLM may not have “understood” the textbook (or perhaps misunderstood the questions). Performance is better (~60%) when the exam is taken open book (i.e., allowing the machine to automatically retrieve a paragraph and use it to answer the question).
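The abstract contrasts two evaluation regimes over the same true/false statements: closed book (the model sees only the statement) and open book (a retrieved textbook paragraph is prepended). Below is a minimal sketch of that setup, assuming a Hugging Face T5 checkpoint and a BoolQ-style prompt; the checkpoint name, prompt format, and example statement are illustrative assumptions, not the authors' exact configuration (the paper's baseline first fine-tunes T5 on BoolQ and uses its own retrieval step).

```python
# Sketch of closed-book vs. open-book scoring of a true/false statement.
# Assumptions: "t5-large" stands in for a BoolQ-fine-tuned T5, and the
# BoolQ-style "question: ... passage: ..." prompt format is illustrative.
from typing import Optional

from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL = "t5-large"  # assumption: the paper's baseline is T5 fine-tuned on BoolQ
tokenizer = T5Tokenizer.from_pretrained(MODEL)
model = T5ForConditionalGeneration.from_pretrained(MODEL)

def judge(statement: str, passage: Optional[str] = None) -> str:
    """Return the model's raw answer to a true/false statement.

    Closed book: the model sees only the statement.
    Open book: a retrieved textbook paragraph is prepended, BoolQ-style.
    """
    if passage is None:
        prompt = f"question: {statement} true or false?"
    else:
        prompt = f"question: {statement} passage: {passage}"
    input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids
    output = model.generate(input_ids, max_new_tokens=4)
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Closed-book call; an open-book call would pass passage=retrieved_paragraph.
print(judge("The U.S. Constitution was ratified in 1788."))
```

Accuracy is then the fraction of statements whose decoded answer matches the gold true/false label, which is what makes the ~50% random baseline meaningful on a balanced set.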
- Anthology ID:
- 2021.emnlp-main.493
- Volume:
- Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
- Month:
- November
- Year:
- 2021
- Address:
- Online and Punta Cana, Dominican Republic
- Editors:
- Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 6104–6111
- URL:
- https://aclanthology.org/2021.emnlp-main.493
- DOI:
- 10.18653/v1/2021.emnlp-main.493
- Cite (ACL):
- Manuel Ciosici, Joe Cecil, Dong-Ho Lee, Alex Hedges, Marjorie Freedman, and Ralph Weischedel. 2021. Perhaps PTLMs Should Go to School – A Task to Assess Open Book and Closed Book QA. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6104–6111, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal):
- Perhaps PTLMs Should Go to School – A Task to Assess Open Book and Closed Book QA (Ciosici et al., EMNLP 2021)
- PDF:
- https://aclanthology.org/2021.emnlp-main.493.pdf
- Data
- BoolQ, SQuAD