MR-P: A Parallel Decoding Algorithm for Iterative Refinement Non-Autoregressive Translation

Hao Cheng, Zhihua Zhang


Abstract
Non-autoregressive translation (NAT) predicts all the target tokens in parallel and significantly speeds up inference. The Conditional Masked Language Model (CMLM) is a strong NAT baseline; it decodes with the Mask-Predict algorithm, which iteratively refines the output. Most work on the CMLM focuses on the model structure and the training objective, yet the decoding algorithm is equally important. We propose a simple, effective, and easy-to-implement decoding algorithm that we call MaskRepeat-Predict (MR-P). When selecting tokens to mask for the next iteration, MR-P gives higher priority to consecutive repeated tokens, and it stops iterating once the target tokens converge. We conduct extensive experiments on six translation directions with varying data sizes. The results show that MR-P significantly improves performance with the same model parameters; in particular, we achieve an increase of 1.39 BLEU points on the WMT’14 En-De translation task.
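
Below is a minimal sketch of one MR-P masking step in Python/PyTorch, under our own assumptions: the function names (mrp_select_mask, converged), the tensor shapes, and the exact way repeated positions are pushed to the front of the masking order are illustrative and are not taken from the authors' implementation.

import torch

def mrp_select_mask(tokens, probs, num_mask, pad_id):
    # tokens: LongTensor [seq_len], the current target hypothesis
    # probs:  FloatTensor [seq_len], model confidence (in [0, 1]) for each position
    # Returns the indices of the num_mask positions to re-mask in the next iteration.
    same_as_prev = torch.zeros_like(tokens, dtype=torch.bool)
    same_as_prev[1:] = tokens[1:] == tokens[:-1]
    same_as_next = torch.zeros_like(tokens, dtype=torch.bool)
    same_as_next[:-1] = same_as_prev[1:]
    # A position counts as "repeated" if it matches its left or right neighbour
    # and is not padding.
    repeated = (same_as_prev | same_as_next) & (tokens != pad_id)

    # Push repeated positions below every non-repeated one, then mask the
    # lowest-confidence positions as in ordinary Mask-Predict.
    adjusted = probs.clone()
    adjusted[repeated] -= 1.0  # confidences lie in [0, 1], so repeats always rank first
    return adjusted.topk(num_mask, largest=False).indices

def converged(prev_tokens, tokens):
    # Early stop: the refinement loop can terminate once the hypothesis stops changing.
    return torch.equal(prev_tokens, tokens)

For example, with tokens [7, 7, 3, 9, 9, 9, 1] and uniform confidences, the repeated runs 7 7 and 9 9 9 would be re-masked before any other position; the loop then re-predicts the masked slots and stops as soon as converged(...) returns True or the iteration budget is exhausted.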
Anthology ID:
2022.findings-acl.25
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
285–296
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2022.findings-acl.25/
DOI:
10.18653/v1/2022.findings-acl.25
Cite (ACL):
Hao Cheng and Zhihua Zhang. 2022. MR-P: A Parallel Decoding Algorithm for Iterative Refinement Non-Autoregressive Translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 285–296, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
MR-P: A Parallel Decoding Algorithm for Iterative Refinement Non-Autoregressive Translation (Cheng & Zhang, Findings 2022)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2022.findings-acl.25.pdf