Abstract
Non-autoregressive machine translation (NAT) approaches enable fast generation by utilizing parallelizable generative processes. The remaining bottleneck in these models is their decoder layers; unfortunately, unlike in autoregressive models (Kasai et al., 2020), removing decoder layers from NAT models significantly degrades accuracy. This work proposes a sequence-to-lattice model that replaces the decoder with a search lattice. Our approach first constructs a candidate lattice using efficient lookup operations, generates lattice scores from a deep encoder, and finally finds the best path using dynamic programming. Experiments on three machine translation datasets show that our method is faster than past non-autoregressive generation approaches, and more accurate than naively reducing the number of decoder layers.
- Anthology ID:
- 2021.findings-emnlp.318
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2021
- Month:
- November
- Year:
- 2021
- Address:
- Punta Cana, Dominican Republic
- Venue:
- Findings
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 3765–3772
- URL:
- https://aclanthology.org/2021.findings-emnlp.318
- DOI:
- 10.18653/v1/2021.findings-emnlp.318
- Cite (ACL):
- Yuntian Deng and Alexander Rush. 2021. Sequence-to-Lattice Models for Fast Translation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3765–3772, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal):
- Sequence-to-Lattice Models for Fast Translation (Deng & Rush, Findings 2021)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/2021.findings-emnlp.318.pdf
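The final stage described in the abstract, finding the best path through the scored lattice with dynamic programming, can be sketched as a standard Viterbi-style search over a position-indexed lattice. The code below is an illustrative sketch only, not the authors' implementation: the lattice encoding (`node_scores` for per-position candidate scores, assumed here to come from the deep encoder, and `trans` for sparse transition scores between adjacent positions) is a hypothetical format chosen for this example.

```python
from typing import Dict, List, Tuple

def viterbi_best_path(node_scores: List[List[float]],
                      trans: List[Dict[Tuple[int, int], float]]) -> List[int]:
    """Return the highest-scoring path through a candidate lattice.

    node_scores[i][k]: score of candidate k at position i (hypothetical
        stand-in for the encoder-produced lattice scores).
    trans[i][(j, k)]: score of the edge from candidate j at position i
        to candidate k at position i + 1; absent edges are disallowed.
    """
    n = len(node_scores)
    # best[i][k]: best total score of any path ending at candidate k of position i.
    best = [[-float("inf")] * len(node_scores[i]) for i in range(n)]
    back = [[0] * len(node_scores[i]) for i in range(n)]
    best[0] = list(node_scores[0])
    for i in range(1, n):
        for (j, k), t in trans[i - 1].items():
            cand = best[i - 1][j] + t + node_scores[i][k]
            if cand > best[i][k]:
                best[i][k] = cand
                back[i][k] = j  # remember the predecessor for backtracking
    # Trace back from the highest-scoring final candidate.
    k = max(range(len(best[-1])), key=best[-1].__getitem__)
    path = [k]
    for i in range(n - 1, 0, -1):
        k = back[i][k]
        path.append(k)
    return path[::-1]

# Toy usage: two positions with two candidates each.
node_scores = [[0.5, 0.1], [0.2, 0.9]]
trans = [{(0, 0): 0.0, (0, 1): 0.3, (1, 1): 0.1}]
print(viterbi_best_path(node_scores, trans))  # -> [0, 1]
```

Because each lattice position only depends on its predecessor, the search runs in time linear in the number of edges, which is what makes replacing the decoder with lattice search attractive for fast generation.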