How Does Selective Mechanism Improve Self-Attention Networks?

Xinwei Geng, Longyue Wang, Xing Wang, Bing Qin, Ting Liu, Zhaopeng Tu


Abstract
Self-attention networks (SANs) with a selective mechanism have produced substantial improvements in various NLP tasks by concentrating on a subset of input words. However, the underlying reasons for their strong performance have not been well explained. In this paper, we bridge the gap by assessing the strengths of selective SANs (SSANs), which are implemented with a flexible and universal Gumbel-Softmax. Experimental results on several representative NLP tasks, including natural language inference, semantic role labelling, and machine translation, show that SSANs consistently outperform the standard SANs. Through well-designed probing experiments, we empirically validate that the improvement of SSANs can be attributed in part to mitigating two commonly-cited weaknesses of SANs: word order encoding and structure modeling. Specifically, the selective mechanism improves SANs by paying more attention to content words that contribute to the meaning of the sentence.
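The abstract describes the selective mechanism as a Gumbel-Softmax-based selection over input words. As a rough illustration only, the PyTorch sketch below gates a single-head self-attention layer with a straight-through Gumbel-Softmax keep/drop decision per token. All names and hyperparameters here (SelectiveSelfAttention, select, tau) are hypothetical and do not reproduce the authors' implementation; see the linked xwgeng/SSAN repository for the official code.

    # Hypothetical sketch of Gumbel-Softmax selective self-attention
    # (illustrative only, not the paper's implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SelectiveSelfAttention(nn.Module):
        def __init__(self, d_model: int, tau: float = 1.0):
            super().__init__()
            self.q_proj = nn.Linear(d_model, d_model)
            self.k_proj = nn.Linear(d_model, d_model)
            self.v_proj = nn.Linear(d_model, d_model)
            self.select = nn.Linear(d_model, 2)  # per-token keep/drop logits
            self.tau = tau
            self.scale = d_model ** -0.5

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_model)
            q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

            # Straight-through Gumbel-Softmax gives a (near-)discrete
            # keep/drop decision per token; column 0 is "keep".
            gate = F.gumbel_softmax(self.select(x), tau=self.tau, hard=True)[..., 0]

            scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale
            # Push dropped key positions toward zero attention weight, so each
            # query concentrates on the selected subset of tokens.
            scores = scores + (gate.unsqueeze(1) - 1.0) * 1e9
            attn = torch.softmax(scores, dim=-1)
            return torch.matmul(attn, v)

    # Toy run: 2 sentences, 5 tokens, 16-dim embeddings.
    layer = SelectiveSelfAttention(d_model=16)
    out = layer(torch.randn(2, 5, 16))
    print(out.shape)  # torch.Size([2, 5, 16])

In this toy run the output keeps the input shape; the selection only changes which key/value positions each query can attend to.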
Anthology ID:
2020.acl-main.269
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2986–2995
URL:
https://aclanthology.org/2020.acl-main.269
DOI:
10.18653/v1/2020.acl-main.269
Bibkey:
Cite (ACL):
Xinwei Geng, Longyue Wang, Xing Wang, Bing Qin, Ting Liu, and Zhaopeng Tu. 2020. How Does Selective Mechanism Improve Self-Attention Networks?. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2986–2995, Online. Association for Computational Linguistics.
Cite (Informal):
How Does Selective Mechanism Improve Self-Attention Networks? (Geng et al., ACL 2020)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2020.acl-main.269.pdf
Video:
http://slideslive.com/38929046
Code:
xwgeng/SSAN
Data:
SNLI