Neural sequence modelling for learner error prediction

Zheng Yuan


Abstract
This paper describes our use of two recurrent neural network sequence models, a sequence labelling model and a sequence-to-sequence model, to predict future learner errors in our submission to the 2018 Duolingo Shared Task on Second Language Acquisition Modeling (SLAM). We show that the two models capture complementary information, as combining them improves performance. Furthermore, the same network architecture and feature set can be used directly to build competitive prediction models in all three language tracks, demonstrating that our approach generalises well across languages.
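To illustrate the sequence-labelling side of the approach, the sketch below shows a minimal vanilla RNN that emits one error probability per token. This is a hypothetical illustration only: the dimensions, class name, and weight initialisation are assumptions, not the paper's actual architecture or features.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RNNSequenceLabeller:
    """Hypothetical vanilla RNN sequence labeller: one error probability
    per token. Illustrative only; not the paper's actual model."""

    def __init__(self, emb_dim, hidden_dim):
        self.Wx = rng.normal(0.0, 0.1, (hidden_dim, emb_dim))   # input weights
        self.Wh = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))  # recurrent weights
        self.b = np.zeros(hidden_dim)
        self.wo = rng.normal(0.0, 0.1, hidden_dim)              # output weights

    def forward(self, embeddings):
        """Run the RNN over a sequence of token embeddings and return
        a per-token probability that the learner will get the token wrong."""
        h = np.zeros(self.Wh.shape[0])
        probs = []
        for x in embeddings:
            h = np.tanh(self.Wx @ x + self.Wh @ h + self.b)
            probs.append(sigmoid(self.wo @ h))
        return np.array(probs)

# Toy usage: 5 tokens with 8-dimensional embeddings.
model = RNNSequenceLabeller(emb_dim=8, hidden_dim=16)
sentence = rng.normal(size=(5, 8))
p_error = model.forward(sentence)
```

In this framing, a second model's per-token scores (e.g. from a sequence-to-sequence decoder) could simply be averaged with `p_error`, mirroring the paper's finding that combining the two models improves performance.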
Anthology ID:
W18-0547
Volume:
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Joel Tetreault, Jill Burstein, Ekaterina Kochmar, Claudia Leacock, Helen Yannakoudakis
Venue:
BEA
SIG:
SIGEDU
Publisher:
Association for Computational Linguistics
Pages:
381–388
URL:
https://aclanthology.org/W18-0547
DOI:
10.18653/v1/W18-0547
Cite (ACL):
Zheng Yuan. 2018. Neural sequence modelling for learner error prediction. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 381–388, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Neural sequence modelling for learner error prediction (Yuan, BEA 2018)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/W18-0547.pdf