Using exemplar responses for training and evaluating automated speech scoring systems
Anastassia Loukina, Klaus Zechner, James Bruno, Beata Beigman Klebanov
Abstract
Automated scoring engines are usually trained and evaluated against human scores and compared to the benchmark of human-human agreement. In this paper we compare the performance of an automated speech scoring engine using two corpora: a corpus of almost 700,000 randomly sampled spoken responses with scores assigned by one or two raters during operational scoring, and a corpus of 16,500 exemplar responses with scores reviewed by multiple expert raters. We show that the choice of corpus used for model evaluation has a major effect on estimates of system performance, with r varying between 0.64 and 0.80. Surprisingly, this is not the case for the choice of corpus for model training: when the training corpus is sufficiently large, the systems trained on different corpora showed almost identical performance when evaluated on the same corpus. We show that this effect is consistent across several learning algorithms. We conclude that evaluating the model on a corpus of exemplar responses, if one is available, provides additional evidence about system validity; at the same time, investing effort into creating a corpus of exemplar responses for model training is unlikely to lead to a substantial gain in model performance.
- Anthology ID:
- W18-0501
- Volume:
- Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
- Month:
- June
- Year:
- 2018
- Address:
- New Orleans, Louisiana
- Editors:
- Joel Tetreault, Jill Burstein, Ekaterina Kochmar, Claudia Leacock, Helen Yannakoudakis
- Venue:
- BEA
- SIG:
- SIGEDU
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1–12
- URL:
- https://aclanthology.org/W18-0501
- DOI:
- 10.18653/v1/W18-0501
- Cite (ACL):
- Anastassia Loukina, Klaus Zechner, James Bruno, and Beata Beigman Klebanov. 2018. Using exemplar responses for training and evaluating automated speech scoring systems. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 1–12, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal):
- Using exemplar responses for training and evaluating automated speech scoring systems (Loukina et al., BEA 2018)
- PDF:
- https://aclanthology.org/W18-0501.pdf
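The abstract reports agreement between system and human scores as a Pearson correlation (r between 0.64 and 0.80, depending on the evaluation corpus). A minimal sketch of how such an agreement statistic is computed, using NumPy and hypothetical scores on an assumed 1-4 scale (the scores and scale below are illustrative, not from the paper):

```python
import numpy as np

def pearson_r(human, machine):
    """Pearson correlation between human and automated scores."""
    h = np.asarray(human, dtype=float)
    m = np.asarray(machine, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
    # entry is the correlation between the two score vectors.
    return float(np.corrcoef(h, m)[0, 1])

# Hypothetical human and system scores for eight responses
human_scores = [3, 2, 4, 3, 1, 4, 2, 3]
machine_scores = [3, 2, 3, 3, 2, 4, 2, 4]
print(f"r = {pearson_r(human_scores, machine_scores):.2f}")
```

The same statistic computed between two human raters gives the human-human agreement benchmark the paper compares against.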