Quiz Design Task: Helping Teachers Create Quizzes with Automated Question Generation

Philippe Laban, Chien-Sheng Wu, Lidiya Murakhovs’ka, Wenhao Liu, Caiming Xiong


Abstract
Question generation (QGen) models are often evaluated with standardized NLG metrics that are based on n-gram overlap. In this paper, we measure whether these metric improvements translate to gains in a practical setting, focusing on the use case of helping teachers automate the generation of reading comprehension quizzes. In our study, teachers building a quiz receive question suggestions, which they can either accept or refuse with a reason. Even though we find that recent progress in QGen leads to a significant increase in question acceptance rates, there is still large room for improvement, with the best model having only 68.4% of its questions accepted by the ten teachers who participated in our study. We then leverage the annotations we collected to analyze standard NLG metrics and find that model performance has reached projected upper-bounds, suggesting new automatic metrics are needed to guide QGen research forward.
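For readers unfamiliar with the n-gram-overlap metrics the abstract refers to, the sketch below illustrates a generic overlap score in the spirit of BLEU/ROUGE. It is an assumed, minimal formulation for illustration only, not the evaluation code or the specific metrics used in the paper.

# Illustrative sketch only: a generic n-gram-overlap F1 between a generated
# question and a reference question; assumed formulation, not the paper's code.
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_overlap_f1(candidate, reference, n=1):
    """F1 of n-gram overlap between a candidate and a reference string."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: a generated question scored against a human-written reference.
print(ngram_overlap_f1("When was the bridge built?",
                       "In what year was the bridge built?"))

Such scores reward surface word overlap with a reference question, which is precisely the kind of signal the study compares against teachers' accept/refuse judgments.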
Anthology ID: 2022.findings-naacl.9
Volume: Findings of the Association for Computational Linguistics: NAACL 2022
Month: July
Year: 2022
Address: Seattle, United States
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 102–111
URL: https://aclanthology.org/2022.findings-naacl.9
DOI: 10.18653/v1/2022.findings-naacl.9
Cite (ACL): Philippe Laban, Chien-Sheng Wu, Lidiya Murakhovs’ka, Wenhao Liu, and Caiming Xiong. 2022. Quiz Design Task: Helping Teachers Create Quizzes with Automated Question Generation. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 102–111, Seattle, United States. Association for Computational Linguistics.
Cite (Informal): Quiz Design Task: Helping Teachers Create Quizzes with Automated Question Generation (Laban et al., Findings 2022)
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.findings-naacl.9.pdf
Data: SQuAD