Syntactically Supervised Transformers for Faster Neural Machine Translation

Nader Akoury, Kalpesh Krishna, Mohit Iyyer


Abstract
Standard decoders for neural machine translation autoregressively generate a single target token per timestep, which slows inference especially for long outputs. While architectural advances such as the Transformer fully parallelize the decoder computations at training time, inference still proceeds sequentially. Recent developments in non- and semi-autoregressive decoding produce multiple tokens per timestep independently of one another, which improves inference speed but degrades translation quality. In this work, we propose the syntactically supervised Transformer (SynST), which first autoregressively predicts a chunked parse tree before generating all of the target tokens in one shot conditioned on the predicted parse. A series of controlled experiments demonstrates that SynST decodes sentences ~5x faster than the baseline autoregressive Transformer while achieving higher BLEU scores than most competing methods on En-De and En-Fr datasets.
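
To make the two-stage decoding procedure described in the abstract concrete, here is a minimal Python sketch of the control flow. It is an illustration under stated assumptions, not the authors' implementation or the API of the released dojoteef/synst code: the names decode_synst, predict_next_chunk, fill_tokens_parallel, and EOS_CHUNK are hypothetical stand-ins. Stage 1 runs a short autoregressive loop over coarse parse chunks; stage 2 fills in all target tokens in a single parallel pass conditioned on the predicted parse.

# Hypothetical sketch of SynST-style two-stage decoding (not the authors' code).
# Stage 1: autoregressively predict a sequence of chunk identifiers, where each
# identifier encodes a constituent type and the number of target tokens it spans.
# Stage 2: expand the chunks into placeholder slots and fill every slot in a
# single parallel pass, conditioned on the source and the predicted parse.

from typing import Callable, List, Tuple

Chunk = Tuple[str, int]          # e.g. ("NP", 3) = noun-phrase chunk spanning 3 tokens
EOS_CHUNK: Chunk = ("<eos>", 0)  # hypothetical end-of-parse marker


def decode_synst(
    source_tokens: List[str],
    predict_next_chunk: Callable[[List[str], List[Chunk]], Chunk],
    fill_tokens_parallel: Callable[[List[str], List[Chunk]], List[str]],
    max_chunks: int = 64,
) -> List[str]:
    """Two-stage decoding: autoregressive over chunks, single shot over tokens."""
    # Stage 1: autoregressive loop over coarse parse chunks. The chunk sequence
    # is much shorter than the token sequence, so there are few sequential steps.
    chunks: List[Chunk] = []
    for _ in range(max_chunks):
        chunk = predict_next_chunk(source_tokens, chunks)
        if chunk == EOS_CHUNK:
            break
        chunks.append(chunk)

    # Stage 2: one parallel pass produces every target token at once,
    # conditioned on the source and the predicted chunked parse.
    return fill_tokens_parallel(source_tokens, chunks)


if __name__ == "__main__":
    # Toy stand-ins for the two model components, just to exercise the loop.
    canned_parse = [("NP", 2), ("VP", 3), EOS_CHUNK]

    def toy_chunk_predictor(src, chunks_so_far):
        return canned_parse[len(chunks_so_far)]

    def toy_token_filler(src, chunks):
        total = sum(size for _, size in chunks)
        return [f"tok{i}" for i in range(total)]

    print(decode_synst(["ein", "Beispiel"], toy_chunk_predictor, toy_token_filler))

Because the only sequential loop runs over chunks rather than tokens, decoding requires far fewer sequential steps than token-by-token generation, which is the source of the speedup reported in the abstract.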
Anthology ID:
P19-1122
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1269–1281
URL:
https://aclanthology.org/P19-1122
DOI:
10.18653/v1/P19-1122
Cite (ACL):
Nader Akoury, Kalpesh Krishna, and Mohit Iyyer. 2019. Syntactically Supervised Transformers for Faster Neural Machine Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1269–1281, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Syntactically Supervised Transformers for Faster Neural Machine Translation (Akoury et al., ACL 2019)
PDF:
https://aclanthology.org/P19-1122.pdf
Poster:
P19-1122.Poster.pdf
Code
dojoteef/synst