A Gated Self-attention Memory Network for Answer Selection

Tuan Lai, Quan Hung Tran, Trung Bui, Daisuke Kihara


Abstract
Answer selection is an important research problem, with applications in many areas. Previous deep learning-based approaches to the task mainly adopt the Compare-Aggregate architecture, which performs word-level comparison followed by aggregation. In this work, we depart from the popular Compare-Aggregate architecture and instead propose a new gated self-attention memory network for the task. Combined with a simple transfer learning technique from a large-scale online corpus, our model outperforms previous methods by a large margin, achieving new state-of-the-art results on two standard answer selection datasets: TrecQA and WikiQA.
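
The gated self-attention mechanism named in the abstract can be illustrated with a minimal sketch. The code below assumes a standard scaled dot-product self-attention whose output is fused with the input through a learned sigmoid gate; the module name, dimensions, and gating formulation are illustrative assumptions, not the authors' exact equations (see the paper for those).

# Minimal sketch of a gated self-attention block (hypothetical formulation;
# the paper's exact equations may differ). Self-attention produces a
# context summary per position, which is fused with the original input
# through a learned sigmoid gate.
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)
        # Gate sees both the input and the attended context.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, x):
        # x: (batch, seq_len, hidden_dim)
        scores = torch.matmul(self.query(x), self.key(x).transpose(1, 2))
        scores = scores / (x.size(-1) ** 0.5)          # scaled dot product
        attn = torch.softmax(scores, dim=-1)
        context = torch.matmul(attn, self.value(x))    # attended summary
        g = torch.sigmoid(self.gate(torch.cat([x, context], dim=-1)))
        return g * x + (1 - g) * context               # gated fusion

# Usage: a batch of 2 sequences, length 10, hidden size 128.
x = torch.randn(2, 10, 128)
out = GatedSelfAttention(128)(x)  # shape: (2, 10, 128)
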
Anthology ID:
D19-1610
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
5953–5959
URL:
https://aclanthology.org/D19-1610
DOI:
10.18653/v1/D19-1610
Cite (ACL):
Tuan Lai, Quan Hung Tran, Trung Bui, and Daisuke Kihara. 2019. A Gated Self-attention Memory Network for Answer Selection. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5953–5959, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
A Gated Self-attention Memory Network for Answer Selection (Lai et al., EMNLP-IJCNLP 2019)
PDF:
https://aclanthology.org/D19-1610.pdf
Code
laituan245/StackExchangeQA
Data
WikiQA