2019
Toward Automated Content Feedback Generation for Non-native Spontaneous Speech
Su-Youn Yoon | Ching-Ni Hsieh | Klaus Zechner | Matthew Mulholland | Yuan Wang | Nitin Madnani
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications
In this study, we developed an automated algorithm to provide feedback about the specific content of non-native English speakers’ spoken responses. The responses were spontaneous speech, elicited using integrated tasks where the language learners listened to and/or read passages and integrated the core content in their spoken responses. Our system detected the absence of key points considered to be important in a spoken response to a particular test question, based on two different models: (a) a model using word-embedding based content features and (b) a state-of-the-art short response scoring engine using traditional n-gram based features. Both models achieved a substantially improved performance over the majority baseline, and the combination of the two models achieved a significant further improvement. In particular, the models were robust to automated speech recognition (ASR) errors, and performance based on the ASR word hypotheses was comparable to that based on manual transcriptions. The accuracy and F-score of the best model for the questions included in the training set were 0.80 and 0.68, respectively. Finally, we discussed possible approaches to generating targeted feedback about the content of a language learner’s response, based on automatically detected missing key points.
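As a rough illustration of the word-embedding model's key-point detection (a minimal sketch, not the authors' implementation; the embedding lookup, key points, and similarity threshold are all hypothetical):

```python
import numpy as np

def embed(text, embeddings, dim=300):
    """Average the word vectors of all in-vocabulary tokens in a text."""
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def missing_key_points(response, key_points, embeddings, threshold=0.4):
    """Flag key points whose cosine similarity to the response falls below a threshold."""
    resp = embed(response, embeddings)
    missing = []
    for kp in key_points:
        vec = embed(kp, embeddings)
        denom = np.linalg.norm(resp) * np.linalg.norm(vec)
        similarity = float(resp @ vec) / denom if denom else 0.0
        if similarity < threshold:
            missing.append(kp)
    return missing
```

Here `embeddings` stands in for any pretrained word-vector lookup (a dict mapping words to NumPy arrays); the threshold would in practice be tuned on annotated responses.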
Application of an Automatic Plagiarism Detection System in a Large-scale Assessment of English Speaking Proficiency
Xinhao Wang | Keelan Evanini | Matthew Mulholland | Yao Qian | James V. Bruno
Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications
This study aims to build an automatic system for the detection of plagiarized spoken responses in the context of an assessment of English speaking proficiency for non-native speakers. Classification models were trained to distinguish between plagiarized and non-plagiarized responses with two different types of features: text-to-text content similarity measures, which are commonly used in the task of plagiarism detection for written documents, and speaking proficiency measures, which were specifically designed for spontaneous speech and extracted using an automated speech scoring system. The experiments were first conducted on a large data set drawn from an operational English proficiency assessment across multiple years, and the best classifier on this heavily imbalanced data set resulted in an F1-score of 0.761 on the plagiarized class. This system was then validated on operational responses collected from a single administration of the assessment and achieved a recall of 0.897. The results indicate that the proposed system can potentially be used to improve the validity of both human and automated assessment of non-native spoken English.
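For context, one plausible text-to-text content similarity feature of the kind described (a sketch only, not the paper's actual feature set) compares a response transcript against the known source texts using TF-IDF n-gram cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def max_source_similarity(response_transcript, source_texts):
    """Highest TF-IDF cosine similarity between a response transcript and any
    known source text; unusually high values suggest possible plagiarism."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 3), lowercase=True)
    matrix = vectorizer.fit_transform(list(source_texts) + [response_transcript])
    similarities = cosine_similarity(matrix[-1], matrix[:-1])
    return float(similarities.max())
```

In the study this kind of content-overlap measure is combined with speaking proficiency features from an automated scoring system before classification.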
Scoring Interactional Aspects of Human-Machine Dialog for Language Learning and Assessment using Text Features
Vikram Ramanarayanan | Matthew Mulholland | Yao Qian
Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue
While there has been much work in the language learning and assessment literature on human and automated scoring of essays and short constructed responses, there is little to no work examining text features for scoring of dialog data, particularly interactional aspects thereof, to assess conversational proficiency over and above constructed response skills. Our work bridges this gap by investigating both human and automated approaches towards scoring human–machine text dialog in the context of a real-world language learning application. We collected conversational data of human learners interacting with a cloud-based standards-compliant dialog system, triple-scored these data along multiple dimensions of conversational proficiency, and then analyzed the performance trends. We further examined two different approaches to automated scoring of such data and show that these approaches perform on par with or above human agreement for a majority of dimensions of the scoring rubric.
2018
Word-Embedding based Content Features for Automated Oral Proficiency Scoring
Su-Youn Yoon | Anastassia Loukina | Chong Min Lee | Matthew Mulholland | Xinhao Wang | Ikkyu Choi
Proceedings of the Third Workshop on Semantic Deep Learning
In this study, we develop content features for an automated scoring system of non-native English speakers’ spontaneous speech. The features calculate the lexical similarity between the question text and the ASR word hypothesis of the spoken response, based on traditional word vector models or word embeddings. The proposed features do not require any sample training responses for each question, and this is a strong advantage since collecting question-specific data is an expensive task, and sometimes even impossible due to concerns about question exposure. We explore the impact of these new features on the automated scoring of two different question types: (a) providing opinions on familiar topics and (b) answering a question about a stimulus material. The proposed features showed statistically significant correlations with the oral proficiency scores, and the combination of new features with the speech-driven features achieved a small but significant further improvement for the latter question type. Further analyses suggested that the new features were effective in assigning more accurate scores for responses with serious content issues.
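A minimal sketch of such a prompt–response similarity feature (illustrative only; the embedding file path is a placeholder, and the paper's actual feature set may differ):

```python
from gensim.models import KeyedVectors

# Placeholder path to any pretrained word2vec-format embedding file.
kv = KeyedVectors.load_word2vec_format("embeddings/word2vec.bin", binary=True)

def question_response_similarity(question_text, asr_hypothesis):
    """Cosine similarity between the mean embeddings of the question prompt and
    the ASR word hypothesis, using in-vocabulary tokens only; no question-specific
    training responses are required."""
    q = [w for w in question_text.lower().split() if w in kv]
    r = [w for w in asr_hypothesis.lower().split() if w in kv]
    return float(kv.n_similarity(q, r)) if q and r else 0.0
```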
2016
Enhancing STEM Motivation through Personal and Communal Values: NLP for Assessment of Utility Value in Student Writing
Beata Beigman Klebanov | Jill Burstein | Judith Harackiewicz | Stacy Priniski | Matthew Mulholland
Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications
2014
Predicting Grammaticality on an Ordinal Scale
Michael Heilman | Aoife Cahill | Nitin Madnani | Melissa Lopez | Matthew Mulholland | Joel Tetreault
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
2013
Suicidal Tendencies: The Automatic Classification of Suicidal and Non-Suicidal Lyricists Using NLP
Matthew Mulholland | Joanne Quinn
Proceedings of the Sixth International Joint Conference on Natural Language Processing