Evaluation of Automatically Generated Pronoun Reference Questions

Arief Yudha Satria, Takenobu Tokunaga


Abstract
This study provides a detailed analysis of the evaluation of automatically generated English pronoun reference questions. Pronoun reference questions are multiple-choice questions that ask test takers to choose the antecedent of a target pronoun in a reading passage from four options. The evaluation was performed from two perspectives: that of English teachers and that of English learners. Item analysis suggests that machine-generated questions achieve quality comparable to that of human-made questions. Correlation analysis revealed a strong correlation between the scores of machine-generated questions and those of human-made questions.
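The item analysis and correlation analysis mentioned in the abstract can be illustrated with standard psychometric measures. The sketch below is not from the paper and makes no claim about its actual procedure or data: it assumes hypothetical 0/1 response matrices (machine_resp, human_resp) for machine-generated and human-made questions and computes an item facility index, a point-biserial discrimination index, and the Pearson correlation between test-taker scores on the two question sets.

# Illustrative sketch only; the paper's own analysis scripts and data are not public.
# machine_resp and human_resp are hypothetical 0/1 matrices
# (rows = test takers, columns = items).
import numpy as np

rng = np.random.default_rng(0)
machine_resp = rng.integers(0, 2, size=(30, 10))  # 30 learners, 10 machine-generated items
human_resp = rng.integers(0, 2, size=(30, 10))    # same learners, 10 human-made items

# Facility index: proportion of test takers answering each item correctly.
facility = machine_resp.mean(axis=0)

# Discrimination index (point-biserial): correlation of each item with the
# total score on the remaining items.
totals = machine_resp.sum(axis=1)
discrimination = np.array([
    np.corrcoef(machine_resp[:, j], totals - machine_resp[:, j])[0, 1]
    for j in range(machine_resp.shape[1])
])

# Pearson correlation between per-learner scores on the machine-generated
# and human-made question sets.
machine_scores = machine_resp.sum(axis=1)
human_scores = human_resp.sum(axis=1)
score_corr = np.corrcoef(machine_scores, human_scores)[0, 1]

print("facility:", np.round(facility, 2))
print("discrimination:", np.round(discrimination, 2))
print("score correlation:", round(score_corr, 2))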
Anthology ID: W17-5008
Volume: Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications
Month: September
Year: 2017
Address: Copenhagen, Denmark
Venue: BEA
SIG: SIGEDU
Publisher: Association for Computational Linguistics
Pages: 76–85
URL: https://aclanthology.org/W17-5008
DOI: 10.18653/v1/W17-5008
Cite (ACL): Arief Yudha Satria and Takenobu Tokunaga. 2017. Evaluation of Automatically Generated Pronoun Reference Questions. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 76–85, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): Evaluation of Automatically Generated Pronoun Reference Questions (Satria & Tokunaga, BEA 2017)
PDF: https://preview.aclanthology.org/auto-file-uploads/W17-5008.pdf