YNU_Deep at SemEval-2018 Task 11: An Ensemble of Attention-based BiLSTM Models for Machine Comprehension

Peng Ding, Xiaobing Zhou


Abstract
We first use GloVe to learn distributed representations automatically from the instance, question, and answer triples. An attention-based Bidirectional LSTM (BiLSTM) model is then used to encode the triples. We also apply a simple ensemble method to improve the effectiveness of our model. The system we developed obtains an encouraging result on this task, achieving an accuracy of 0.7472 on the test set. We rank 5th in the official ranking.
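The paper itself does not spell out the attention mechanism here, but the core idea of attention over BiLSTM states can be sketched as follows: score each hidden state, normalize the scores with a softmax, and pool the states into one vector by the resulting weights. This is a minimal numpy illustration, not the authors' implementation; the scoring vector `w` and the dot-product scoring form are hypothetical stand-ins for whatever learned attention the system used.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Attention pooling over encoder hidden states.

    H: (T, d) matrix of BiLSTM hidden states, one row per token.
    w: (d,) scoring vector (a hypothetical learned parameter).
    Returns a (d,) summary vector: a weighted sum of the rows of H,
    with weights given by a softmax over the scores H @ w.
    """
    scores = H @ w            # (T,) one scalar score per time step
    alpha = softmax(scores)   # attention weights, sum to 1
    return alpha @ H          # (d,) weighted average of hidden states

# Toy usage: 4 time steps, hidden size 3
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
w = rng.standard_normal(3)
v = attention_pool(H, w)
print(v.shape)  # (3,)
```

The "simple ensemble" mentioned in the abstract could analogously be realized by averaging the per-answer probabilities produced by several such models, though the paper's exact combination rule is not stated on this page.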
Anthology ID:
S18-1174
Volume:
Proceedings of the 12th International Workshop on Semantic Evaluation
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Marianna Apidianaki, Saif M. Mohammad, Jonathan May, Ekaterina Shutova, Steven Bethard, Marine Carpuat
Venue:
SemEval
SIG:
SIGLEX
Publisher:
Association for Computational Linguistics
Pages:
1043–1047
URL:
https://aclanthology.org/S18-1174
DOI:
10.18653/v1/S18-1174
Cite (ACL):
Peng Ding and Xiaobing Zhou. 2018. YNU_Deep at SemEval-2018 Task 11: An Ensemble of Attention-based BiLSTM Models for Machine Comprehension. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 1043–1047, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
YNU_Deep at SemEval-2018 Task 11: An Ensemble of Attention-based BiLSTM Models for Machine Comprehension (Ding & Zhou, SemEval 2018)
PDF:
https://preview.aclanthology.org/naacl24-info/S18-1174.pdf