Surprisingly Easy Hard-Attention for Sequence to Sequence Learning

Shiv Shankar, Siddhant Garg, Sunita Sarawagi


Abstract
In this paper we show that a simple beam approximation of the joint distribution between attention and output is an easy, accurate, and efficient attention mechanism for sequence to sequence learning. The method combines the advantage of sharp focus in hard attention and the implementation ease of soft attention. On five translation tasks we show effortless and consistent gains in BLEU compared to existing attention mechanisms.
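The abstract describes the idea at a high level: rather than feeding the decoder a single averaged context vector (soft attention), the output distribution is marginalized over the few most probable attended source positions, i.e. a top-k "beam" approximation of the joint attention/output distribution. The sketch below is an illustrative, unofficial rendering of that idea in PyTorch, not the authors' released code (see the linked sid7954/beam-joint-attention repository); the function signatures and variable names are assumptions made for the example.

```python
# Minimal sketch (assumed interfaces, not the authors' implementation) of a
# top-k "beam" approximation of the joint attention/output distribution.
import torch
import torch.nn.functional as F


def beam_joint_attention_step(dec_state, enc_outputs, score_fn, output_layer, k=3):
    """One decoding step that marginalizes the output distribution over the
    top-k attended source positions instead of averaging source vectors.

    dec_state:    (batch, hidden)           current decoder state
    enc_outputs:  (batch, src_len, hidden)  encoder states
    score_fn:     callable (dec_state, enc_outputs) -> (batch, src_len) scores
    output_layer: callable (batch, k, 2*hidden) -> (batch, k, vocab) logits
    """
    scores = score_fn(dec_state, enc_outputs)             # unnormalized scores
    attn = F.softmax(scores, dim=-1)                      # P(a | x, history)

    # Keep only the k most probable source positions (the "beam") and renormalize.
    top_p, top_idx = attn.topk(k, dim=-1)                 # (batch, k)
    top_p = top_p / top_p.sum(dim=-1, keepdim=True)

    # Gather one candidate context vector per attended position.
    idx = top_idx.unsqueeze(-1).expand(-1, -1, enc_outputs.size(-1))
    contexts = enc_outputs.gather(1, idx)                 # (batch, k, hidden)

    # Hard attention: a separate output distribution per attended position,
    # then mix the distributions:  P(y) ~= sum_a P(a) * P(y | a).
    combined = torch.cat(
        [dec_state.unsqueeze(1).expand(-1, k, -1), contexts], dim=-1
    )                                                      # (batch, k, 2*hidden)
    per_pos = F.softmax(output_layer(combined), dim=-1)    # (batch, k, vocab)
    return (top_p.unsqueeze(-1) * per_pos).sum(dim=1)      # (batch, vocab)
```

Because only the top-k positions are retained, the cost per step stays close to that of soft attention while the model keeps the sharp, per-position focus of hard attention.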
Anthology ID:
D18-1065
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Editors:
Ellen Riloff, David Chiang, Julia Hockenmaier, Jun’ichi Tsujii
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
640–645
URL:
https://aclanthology.org/D18-1065
DOI:
10.18653/v1/D18-1065
Cite (ACL):
Shiv Shankar, Siddhant Garg, and Sunita Sarawagi. 2018. Surprisingly Easy Hard-Attention for Sequence to Sequence Learning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 640–645, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Surprisingly Easy Hard-Attention for Sequence to Sequence Learning (Shankar et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/ml4al-ingestion/D18-1065.pdf
Video:
https://preview.aclanthology.org/ml4al-ingestion/D18-1065.mp4
Code:
sid7954/beam-joint-attention