Abstract
We compare the fast training and decoding speed of RETURNN attention models for translation, which is due to fast CUDA LSTM kernels and a fast pure TensorFlow beam search decoder. We show that a layer-wise pretraining scheme for recurrent attention models yields an improvement of over 1% BLEU absolute and allows training deeper recurrent encoder networks. Promising preliminary results on maximum expected BLEU training are presented. We are able to train state-of-the-art models for translation and end-to-end models for speech recognition, and we show results on WMT 2017 and Switchboard. The flexibility of RETURNN enables a fast research feedback loop for experimenting with alternative architectures, and its generality allows it to be used in a wide range of applications.
- Anthology ID: P18-4022
- Volume: Proceedings of ACL 2018, System Demonstrations
- Month: July
- Year: 2018
- Address: Melbourne, Australia
- Editors: Fei Liu, Thamar Solorio
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 128–133
- URL: https://aclanthology.org/P18-4022
- DOI: 10.18653/v1/P18-4022
- Cite (ACL): Albert Zeyer, Tamer Alkhouli, and Hermann Ney. 2018. RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition. In Proceedings of ACL 2018, System Demonstrations, pages 128–133, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal): RETURNN as a Generic Flexible Neural Toolkit with Application to Translation and Speech Recognition (Zeyer et al., ACL 2018)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/P18-4022.pdf
- Code: rwth-i6/returnn + additional community code
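As an illustration of the layer-wise pretraining scheme mentioned in the abstract, below is a minimal sketch in plain TensorFlow/Keras rather than RETURNN's actual configuration mechanism: the recurrent encoder starts shallow and gains one LSTM layer per pretraining stage, and the layer objects are shared across stages so the weights learned earlier carry over. All names, sizes, and data here are hypothetical placeholders.

```python
# Illustrative sketch of layer-wise pretraining for a deep recurrent encoder.
# Not RETURNN's config API; a generic Keras stand-in with dummy data.
import numpy as np
import tensorflow as tf

VOCAB, EMBED, HIDDEN, MAX_DEPTH = 1000, 64, 128, 4  # hypothetical sizes

# Layers are created once and reused in every stage, so their weights persist.
embed = tf.keras.layers.Embedding(VOCAB, EMBED)
lstm_stack = [tf.keras.layers.LSTM(HIDDEN, return_sequences=True)
              for _ in range(MAX_DEPTH)]
classifier = tf.keras.layers.Dense(VOCAB)  # stand-in for the attention decoder


def build_model(depth):
    """Build an encoder with `depth` stacked LSTM layers on the shared embedding."""
    inp = tf.keras.Input(shape=(None,), dtype="int32")
    x = embed(inp)
    for layer in lstm_stack[:depth]:
        x = layer(x)
    out = classifier(x)
    model = tf.keras.Model(inp, out)
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    return model


# Dummy data only to make the sketch runnable end to end.
x_train = np.random.randint(0, VOCAB, size=(32, 20))
y_train = np.random.randint(0, VOCAB, size=(32, 20))

# Grow the encoder by one layer per stage; earlier layers keep their weights.
for depth in range(1, MAX_DEPTH + 1):
    model = build_model(depth)
    model.fit(x_train, y_train, epochs=1, verbose=0)
    print(f"finished pretraining stage with encoder depth {depth}")
```

The point of the scheme is that deep recurrent encoders which diverge when trained from scratch can be trained stably by starting shallow and deepening gradually; this sketch only mirrors that idea, while the paper's actual setup (bidirectional LSTM encoder, attention decoder, RETURNN pretraining config) lives in the rwth-i6/returnn repository linked above.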