Fast Neural Machine Translation Implementation
Hieu Hoang, Tomasz Dwojak, Rihards Krislauks, Daniel Torregrosa, Kenneth Heafield
Abstract
This paper describes the submissions to the efficiency track for GPUs at the Workshop on Neural Machine Translation and Generation by members of the University of Edinburgh, Adam Mickiewicz University, Tilde and University of Alicante. We focus on efficient implementation of the recurrent deep-learning model as implemented in Amun, the fast inference engine for neural machine translation. We improve the performance with an efficient mini-batching algorithm, and by fusing the softmax operation with the k-best extraction algorithm. Submissions using Amun were first, second and third fastest in the GPU efficiency track.
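The abstract mentions fusing the softmax operation with k-best extraction. The sketch below is a minimal, hypothetical CPU-side C++ illustration of that fusion idea, not Amun's actual GPU kernel or API: a single pass over a hypothesis's output-layer logits maintains the running softmax normaliser (with online rescaling) together with a small min-heap of the k best candidates, so the full probability distribution is never materialised and re-scanned. Function and variable names are illustrative only.

```cpp
// Hypothetical sketch: fused softmax + k-best extraction over one
// hypothesis's logits. Because softmax is monotonic in the logits,
// the k largest logits are also the k most probable tokens; the
// normaliser is only needed to report proper log-probabilities.
#include <cmath>
#include <cstddef>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

std::vector<std::pair<std::size_t, float>>
FusedSoftmaxKBest(const std::vector<float>& logits, std::size_t k) {
  // Min-heap keyed on the logit, so the weakest of the current k best
  // candidates sits on top and can be evicted cheaply.
  using Cand = std::pair<float, std::size_t>;  // (logit, vocabulary index)
  std::priority_queue<Cand, std::vector<Cand>, std::greater<Cand>> heap;

  float m = -INFINITY;  // running maximum logit
  float s = 0.f;        // running sum of exp(logit - m)
  for (std::size_t i = 0; i < logits.size(); ++i) {
    const float x = logits[i];
    // Online softmax normaliser: rescale the running sum whenever the
    // maximum changes, so only one pass over the vocabulary is needed.
    if (x > m) {
      s = s * std::exp(m - x) + 1.f;
      m = x;
    } else {
      s += std::exp(x - m);
    }
    // k-best maintenance in the same pass.
    if (heap.size() < k) {
      heap.emplace(x, i);
    } else if (x > heap.top().first) {
      heap.pop();
      heap.emplace(x, i);
    }
  }
  const float logZ = m + std::log(s);  // log of the softmax denominator

  // Convert heap contents to (index, log-probability) pairs,
  // emitted in ascending order of probability.
  std::vector<std::pair<std::size_t, float>> best;
  while (!heap.empty()) {
    best.emplace_back(heap.top().second, heap.top().first - logZ);
    heap.pop();
  }
  return best;
}
```

In a GPU setting the same idea would be expressed as a single kernel per beam that reduces the normaliser and the candidate set together, rather than launching a softmax kernel followed by a separate top-k pass; the single-threaded version above is only meant to show why the two operations compose into one traversal.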
- Anthology ID:
- W18-2714
- Volume:
- Proceedings of the 2nd Workshop on Neural Machine Translation and Generation
- Month:
- July
- Year:
- 2018
- Address:
- Melbourne, Australia
- Editors:
- Alexandra Birch, Andrew Finch, Thang Luong, Graham Neubig, Yusuke Oda
- Venue:
- NGT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 116–121
- URL:
- https://aclanthology.org/W18-2714
- DOI:
- 10.18653/v1/W18-2714
- Cite (ACL):
- Hieu Hoang, Tomasz Dwojak, Rihards Krislauks, Daniel Torregrosa, and Kenneth Heafield. 2018. Fast Neural Machine Translation Implementation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 116–121, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal):
- Fast Neural Machine Translation Implementation (Hoang et al., NGT 2018)
- PDF:
- https://preview.aclanthology.org/naacl24-info/W18-2714.pdf