Comprehension Based Question Answering using Bloom’s Taxonomy

Pritish Sahu, Michael Cogswell, Ajay Divakaran, Sara Rutherford-Quach


Abstract
Current pre-trained language models store a great deal of knowledge but have a more limited ability to use it. Bloom’s Taxonomy helps educators teach children how to use knowledge by categorizing comprehension skills, so we use it to analyze and improve the comprehension skills of large pre-trained language models. Our experiments focus on zero-shot question answering: the taxonomy guides the selection of proximal context that is relevant to a question and thereby helps the model answer it. We show that targeting context in this manner improves performance across four popular commonsense question answering datasets.
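The sketch below is a minimal illustration (not the authors’ released code) of the zero-shot setup the abstract describes: a comprehension-style “proximal context” sentence is prepended to a multiple-choice question, and each answer choice is scored by a pre-trained causal language model’s likelihood. The model name, the example question, and the context string are assumptions for illustration only.

```python
# Hedged sketch: zero-shot multiple-choice QA where a taxonomy-guided context
# sentence is prepended to the question before scoring each answer choice.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # illustrative model choice
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def score(text: str) -> float:
    """Average log-likelihood the LM assigns to `text` (higher is better)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item()

# Hypothetical CommonsenseQA-style item with an "understand"-level context sentence.
context = "People water plants so the plants do not dry out."
question = "Why did Mary water the plant?"
choices = ["so it would grow", "so it would drown", "so it would freeze"]

best = max(choices, key=lambda c: score(f"{context} {question} {c}"))
print(best)
```

Because the context sentence is chosen to be relevant to the question, the model assigns higher likelihood to the supported answer; the paper’s contribution is using Bloom’s Taxonomy to categorize and target such context.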
Anthology ID:
2021.repl4nlp-1.3
Volume:
Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Anna Rogers, Iacer Calixto, Ivan Vulić, Naomi Saphra, Nora Kassner, Oana-Maria Camburu, Trapit Bansal, Vered Shwartz
Venue:
RepL4NLP
Publisher:
Association for Computational Linguistics
Pages:
20–28
URL:
https://aclanthology.org/2021.repl4nlp-1.3
DOI:
10.18653/v1/2021.repl4nlp-1.3
Cite (ACL):
Pritish Sahu, Michael Cogswell, Ajay Divakaran, and Sara Rutherford-Quach. 2021. Comprehension Based Question Answering using Bloom’s Taxonomy. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), pages 20–28, Online. Association for Computational Linguistics.
Cite (Informal):
Comprehension Based Question Answering using Bloom’s Taxonomy (Sahu et al., RepL4NLP 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2021.repl4nlp-1.3.pdf
Video:
 https://preview.aclanthology.org/nschneid-patch-4/2021.repl4nlp-1.3.mp4
Data:
COPA, CommonsenseQA, WinoGrande