Distractor Generation for Multiple Choice Questions Using Learning to Rank

Chen Liang, Xiao Yang, Neisarg Dave, Drew Wham, Bart Pursel, C. Lee Giles


Abstract
We investigate how machine learning models, specifically ranking models, can be used to select useful distractors for multiple choice questions. Our proposed models can learn to select distractors that resemble those in actual exam questions, which is different from most existing unsupervised ontology-based and similarity-based methods. We empirically study feature-based and neural net (NN) based ranking models with experiments on the recently released SciQ dataset and our MCQL dataset. Experimental results show that feature-based ensemble learning methods (random forest and LambdaMART) outperform both the NN-based method and unsupervised baselines. These two datasets can also be used as benchmarks for distractor generation.
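The abstract describes ranking candidate distractors by features of their similarity to the correct answer. As an illustrative sketch only (the features and hand-set weights below are invented for demonstration; the paper itself trains random forest and LambdaMART rankers on a much richer feature set), a minimal pointwise ranker might look like:

```python
# Illustrative pointwise distractor ranking. Features (token Jaccard,
# normalized edit similarity, length similarity) and the weights in
# score() are made up for demonstration, not taken from the paper.

def edit_distance(a, b):
    # Levenshtein distance via the standard two-row DP.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def features(answer, candidate):
    ans_tok = set(answer.lower().split())
    cand_tok = set(candidate.lower().split())
    jaccard = len(ans_tok & cand_tok) / len(ans_tok | cand_tok)
    longest = max(len(answer), len(candidate))
    edit_sim = 1 - edit_distance(answer.lower(), candidate.lower()) / longest
    len_sim = 1 - abs(len(answer) - len(candidate)) / longest
    return [jaccard, edit_sim, len_sim]

def score(feats, weights=(0.5, 0.3, 0.2)):
    # A trained ranker would replace this weighted sum.
    return sum(w * f for w, f in zip(weights, feats))

def rank_distractors(answer, candidates, top_k=3):
    # Score every candidate except the answer itself; return the top_k.
    scored = [(score(features(answer, c)), c) for c in candidates if c != answer]
    return [c for _, c in sorted(scored, reverse=True)[:top_k]]
```

For example, given the answer "red blood cell", a candidate like "white blood cell" would outrank unrelated terms such as "neuron" because it shares tokens and surface form with the answer.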
Anthology ID: W18-0533
Volume: Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
Month: June
Year: 2018
Address: New Orleans, Louisiana
Editors: Joel Tetreault, Jill Burstein, Ekaterina Kochmar, Claudia Leacock, Helen Yannakoudakis
Venue: BEA
SIG: SIGEDU
Publisher: Association for Computational Linguistics
Pages: 284–290
URL: https://aclanthology.org/W18-0533
DOI: 10.18653/v1/W18-0533
Cite (ACL):
Chen Liang, Xiao Yang, Neisarg Dave, Drew Wham, Bart Pursel, and C. Lee Giles. 2018. Distractor Generation for Multiple Choice Questions Using Learning to Rank. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 284–290, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Distractor Generation for Multiple Choice Questions Using Learning to Rank (Liang et al., BEA 2018)
PDF: https://aclanthology.org/W18-0533.pdf
Data: SciQ