Geert-Jan Houben
2019
Training Data Augmentation for Detecting Adverse Drug Reactions in User-Generated Content
Sepideh Mesbah | Jie Yang | Robert-Jan Sips | Manuel Valle Torre | Christoph Lofi | Alessandro Bozzon | Geert-Jan Houben
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Social media provides a timely yet challenging data source for adverse drug reaction (ADR) detection. Existing dictionary-based, semi-supervised learning approaches are intrinsically limited by the coverage and maintainability of layman health vocabularies. In this paper, we introduce a data augmentation approach that leverages variational autoencoders to learn high-quality data distributions from a large unlabeled dataset and subsequently to automatically generate a large labeled training set from a small set of labeled samples. This enables efficient social-media ADR detection with low training and re-training costs, allowing the model to adapt to changes in, and the emergence of, informal medical layman terms. An extensive evaluation on Twitter and Reddit data shows that our approach matches the performance of fully supervised approaches while requiring only 25% of the training data.
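As a rough illustration of the augmentation idea sketched in the abstract (a minimal sketch only, not the authors' implementation), the snippet below fits a small variational autoencoder on unlabeled sentence embeddings and then perturbs the latent codes of a few labeled examples to decode synthetic training vectors that inherit the label. The embedding dimension, network sizes, noise scale, and placeholder tensors are all assumptions for illustration.

```python
# Minimal sketch of VAE-based training-data augmentation (illustrative only).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, dim=300, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent)
        self.logvar = nn.Linear(128, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="sum")        # reconstruction term
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL regularizer
    return rec + kld

# 1) Learn the data distribution from a large unlabeled corpus
#    (random tensors stand in for real sentence embeddings).
unlabeled = torch.randn(10_000, 300)
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):  # a few epochs, for illustration only
    recon, mu, logvar = model(unlabeled)
    loss = vae_loss(recon, unlabeled, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# 2) Augment: encode the small labeled set, add latent noise,
#    and decode to obtain synthetic labeled samples.
labeled = torch.randn(50, 300)  # placeholder for embeddings of labeled ADR posts
with torch.no_grad():
    mu = model.mu(model.enc(labeled))
    noisy = mu.repeat(4, 1) + 0.1 * torch.randn(mu.size(0) * 4, mu.size(1))
    synthetic = model.dec(noisy)  # four synthetic vectors per labeled example
```

The synthetic vectors would then be pooled with the original labeled set to train an ADR classifier; how the labels, noise level, and downstream model are handled in the actual paper is not shown here.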
2018
Feature Engineering for Second Language Acquisition Modeling
Guanliang Chen | Claudia Hauff | Geert-Jan Houben
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
Knowledge tracing serves as a keystone in delivering personalized education. However, few works have attempted to model students’ knowledge state in the setting of Second Language Acquisition. The Duolingo Shared Task on Second Language Acquisition Modeling provides student trace data, which we extensively analyze and from which we engineer features for the task of predicting whether a student will correctly solve a vocabulary exercise. Our analyses of students’ learning traces reveal that factors such as exercise format and engagement strongly impact exercise performance. Overall, we extracted 23 different features as input to a Gradient Tree Boosting framework, which resulted in an AUC score between 0.80 and 0.82 on the official test set.
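As a hedged sketch of this kind of modeling setup (not the paper's actual features, data, or code), the snippet below trains a scikit-learn gradient tree boosting classifier on a placeholder matrix of 23 hand-engineered features and evaluates it with ROC AUC. The feature values and labels are synthetic stand-ins; the hyperparameters are illustrative assumptions.

```python
# Illustrative gradient tree boosting setup for predicting exercise correctness.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Placeholder feature matrix: in practice these would be engineered features
# such as exercise format, time since the word was last seen, past accuracy, ...
X = rng.normal(size=(n, 23))       # 23 features, matching the count in the abstract
y = rng.integers(0, 2, size=n)     # 1 = exercise answered correctly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```

On real trace data, categorical features such as exercise format would first be encoded (e.g. one-hot), and evaluation would follow the shared task's official train/test split rather than a random split.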