A Novel Cascade Model for Learning Latent Similarity from Heterogeneous Sequential Data of MOOC

Zhuoxuan Jiang, Shanshan Feng, Gao Cong, Chunyan Miao, Xiaoming Li


Abstract
Recent years have witnessed the proliferation of Massive Open Online Courses (MOOCs). As MOOCs reach massive numbers of learners, there is a growing demand to organize forum content so that both learners and instructors can benefit. We therefore investigate a practical application: associating forum threads with the subtitles of video clips. This task can be regarded as a document ranking problem, and the key is to learn a distinguishable text representation from word sequences and learners’ behavior sequences. In this paper, we propose a novel cascade model that captures both latent semantics and latent similarity by modeling MOOC data. Experimental results on two real-world datasets demonstrate that our textual representation outperforms state-of-the-art unsupervised counterparts on this application.
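To make the document-ranking framing concrete, the sketch below shows how a forum thread can be ranked against candidate video subtitles by cosine similarity of text vectors. This is only an illustrative baseline with a plain TF-IDF representation, not the paper's cascade model, and the subtitle and thread strings are hypothetical placeholders.

```python
# Illustrative baseline (not the paper's cascade model): rank video
# subtitles against a forum thread using TF-IDF vectors and cosine
# similarity. Subtitle and thread texts are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

subtitles = [
    "gradient descent updates the weights iteratively",
    "overfitting can be reduced with regularization",
    "convolutional layers extract local image features",
]
thread = "why does my model overfit and how does regularization help"

# Fit a shared vocabulary over the candidate subtitles and the query thread.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(subtitles + [thread])

# Treat the thread as the query and the subtitles as candidate documents.
query_vec = matrix[len(subtitles)]
doc_vecs = matrix[: len(subtitles)]
scores = cosine_similarity(query_vec, doc_vecs).ravel()

# Print subtitles from most to least similar to the thread.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. score={scores[idx]:.3f}  {subtitles[idx]}")
```

In the paper, this simple bag-of-words representation would be replaced by the learned representation that also exploits learners' behavior sequences; the ranking step itself stays the same.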
Anthology ID: D17-1293
Volume: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month: September
Year: 2017
Address: Copenhagen, Denmark
Editors: Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue: EMNLP
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 2768–2773
URL: https://aclanthology.org/D17-1293
DOI: 10.18653/v1/D17-1293
Cite (ACL): Zhuoxuan Jiang, Shanshan Feng, Gao Cong, Chunyan Miao, and Xiaoming Li. 2017. A Novel Cascade Model for Learning Latent Similarity from Heterogeneous Sequential Data of MOOC. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2768–2773, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal): A Novel Cascade Model for Learning Latent Similarity from Heterogeneous Sequential Data of MOOC (Jiang et al., EMNLP 2017)
PDF: https://preview.aclanthology.org/dois-2013-emnlp/D17-1293.pdf