A Large Scale Quantitative Exploration of Modeling Strategies for Content Scoring

Nitin Madnani, Anastassia Loukina, Aoife Cahill


Abstract
We explore various supervised learning strategies for automated scoring of content knowledge on a large corpus of 130 content-based questions spanning four subject areas (Science, Math, English Language Arts, and Social Studies) and containing over 230,000 responses scored by human raters. Based on our analyses, we provide specific recommendations for content scoring. These recommendations reflect patterns observed across multiple questions and assessments and are therefore likely to generalize to other scenarios and prove useful to the community as automated content scoring becomes more popular in schools and classrooms.
Anthology ID: W17-5052
Volume: Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Joel Tetreault, Jill Burstein, Claudia Leacock, Helen Yannakoudakis
Venue: BEA
SIG: SIGEDU
Publisher: Association for Computational Linguistics
Pages: 457–467
URL: https://aclanthology.org/W17-5052
DOI: 10.18653/v1/W17-5052
Cite (ACL): Nitin Madnani, Anastassia Loukina, and Aoife Cahill. 2017. A Large Scale Quantitative Exploration of Modeling Strategies for Content Scoring. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 457–467, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): A Large Scale Quantitative Exploration of Modeling Strategies for Content Scoring (Madnani et al., BEA 2017)
PDF: https://aclanthology.org/W17-5052.pdf