Neural Machine Translation with Recurrent Attention Modeling

Zichao Yang, Zhiting Hu, Yuntian Deng, Chris Dyer, Alex Smola


Abstract
Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relative distortion. In experiments, we show our parameterization of attention improves translation quality.
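The per-word recurrent attention memory described in the abstract can be pictured with a minimal sketch. The snippet below is a hypothetical PyTorch illustration of the general idea (a single GRU cell shared across source positions, with one hidden state per source word that accumulates the attention that word has received); all module and parameter names are assumptions made for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class RecurrentAttention(nn.Module):
    """Sketch of attention with a per-source-word recurrent memory.

    One GRU cell is shared across source positions, but every position keeps
    its own hidden state, summarizing how much attention that word has
    received across decoding steps (capturing e.g. fertility and distortion).
    """

    def __init__(self, enc_dim, dec_dim, mem_dim):
        super().__init__()
        self.memory_rnn = nn.GRUCell(input_size=1, hidden_size=mem_dim)
        self.score = nn.Linear(enc_dim + dec_dim + mem_dim, 1)

    def forward(self, enc_states, dec_state, att_memory):
        # enc_states: (src_len, enc_dim)   encoder states, one per source word
        # dec_state:  (dec_dim,)           current decoder hidden state
        # att_memory: (src_len, mem_dim)   per-word attention history
        src_len = enc_states.size(0)
        dec_rep = dec_state.unsqueeze(0).expand(src_len, -1)
        scores = self.score(torch.cat([enc_states, dec_rep, att_memory], dim=-1))
        weights = torch.softmax(scores.squeeze(-1), dim=0)          # (src_len,)
        context = (weights.unsqueeze(-1) * enc_states).sum(dim=0)   # (enc_dim,)
        # Feed the attention each word just received into its recurrent memory,
        # so later steps can condition on the accumulated attention history.
        att_memory = self.memory_rnn(weights.unsqueeze(-1), att_memory)
        return context, weights, att_memory
```

In such a setup, a decoder would initialize att_memory to zeros and thread the updated memory through successive decoding steps.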
Anthology ID: E17-2061
Volume: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Mirella Lapata, Phil Blunsom, Alexander Koller
Venue: EACL
Publisher: Association for Computational Linguistics
Pages: 383–387
URL: https://aclanthology.org/E17-2061
Cite (ACL): Zichao Yang, Zhiting Hu, Yuntian Deng, Chris Dyer, and Alex Smola. 2017. Neural Machine Translation with Recurrent Attention Modeling. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 383–387, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): Neural Machine Translation with Recurrent Attention Modeling (Yang et al., EACL 2017)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/E17-2061.pdf