YNUDLG at IJCNLP-2017 Task 5: A CNN-LSTM Model with Attention for Multi-choice Question Answering in Examinations

Min Wang, Qingxun Liu, Peng Ding, Yongbin Li, Xiaobing Zhou


Abstract
In this paper, we first apply convolutional neural networks (CNNs) to learn joint representations of question-answer pairs, then use these joint representations as inputs to a long short-term memory (LSTM) network with attention, which models the answer sequence of each question to label the matching quality of each answer. We also incorporate external knowledge by training Word2Vec on the Flashcards data, yielding more compact embeddings. Experimental results show that our method achieves better or comparable performance relative to the baseline system. The proposed approach achieves accuracies of 0.39 and 0.42 on the English validation and test sets, respectively.
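The pipeline the abstract describes (joint CNN representations of question-answer pairs, then attention over the candidate answers) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: all dimensions, the filter shapes, and the additive-attention form are assumptions, and the LSTM stage over the answer sequence is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_maxpool(tokens, filters, width=3):
    """1D convolution over a token-embedding sequence, then max-over-time pooling.

    tokens:  (seq_len, emb_dim) embedding matrix
    filters: (num_filters, width * emb_dim) convolution filters (illustrative)
    """
    n, d = tokens.shape
    steps = max(n - width + 1, 1)
    feats = np.empty((steps, filters.shape[0]))
    for i in range(steps):
        window = tokens[i:i + width].reshape(-1)
        # zero-pad sequences shorter than the filter width
        if window.shape[0] < width * d:
            window = np.pad(window, (0, width * d - window.shape[0]))
        feats[i] = np.tanh(filters @ window)
    return feats.max(axis=0)  # max pooling over time -> (num_filters,)

def joint_representation(q_emb, a_emb, filters):
    """Concatenate question and answer tokens, then convolve: one joint QA vector."""
    return conv1d_maxpool(np.vstack([q_emb, a_emb]), filters)

def attention_scores(joint_reps, w, v):
    """Additive attention over the candidate answers' joint representations."""
    h = np.tanh(joint_reps @ w)       # (num_answers, hidden)
    e = h @ v                         # one scalar score per candidate
    alpha = np.exp(e - e.max())
    return alpha / alpha.sum()        # softmax: weights sum to 1

# Toy example: one question, four candidate answers (random embeddings
# stand in for Word2Vec vectors trained on the Flashcards data).
d, k, width, hidden = 8, 6, 3, 5
filters = rng.normal(size=(k, width * d))
w = rng.normal(size=(k, hidden))
v = rng.normal(size=(hidden,))
question = rng.normal(size=(7, d))
answers = [rng.normal(size=(int(rng.integers(3, 6)), d)) for _ in range(4)]

joint = np.vstack([joint_representation(question, a, filters) for a in answers])
weights = attention_scores(joint, w, v)
best = int(np.argmax(weights))  # index of the highest-scoring candidate
```

In the paper's full model, the per-answer joint vectors would be fed through an LSTM before scoring, so that each answer's label can depend on the other candidates in the sequence.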
Anthology ID:
I17-4032
Volume:
Proceedings of the IJCNLP 2017, Shared Tasks
Month:
December
Year:
2017
Address:
Taipei, Taiwan
Editors:
Chao-Hong Liu, Preslav Nakov, Nianwen Xue
Venue:
IJCNLP
Publisher:
Asian Federation of Natural Language Processing
Pages:
194–198
URL:
https://aclanthology.org/I17-4032
Cite (ACL):
Min Wang, Qingxun Liu, Peng Ding, Yongbin Li, and Xiaobing Zhou. 2017. YNUDLG at IJCNLP-2017 Task 5: A CNN-LSTM Model with Attention for Multi-choice Question Answering in Examinations. In Proceedings of the IJCNLP 2017, Shared Tasks, pages 194–198, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal):
YNUDLG at IJCNLP-2017 Task 5: A CNN-LSTM Model with Attention for Multi-choice Question Answering in Examinations (Wang et al., IJCNLP 2017)
PDF:
https://preview.aclanthology.org/add_acl24_videos/I17-4032.pdf