Synchronous Refinement for Neural Machine Translation

Kehai Chen, Masao Utiyama, Eiichiro Sumita, Rui Wang, Min Zhang


Abstract
Machine translation typically adopts an encoder-decoder framework, in which the decoder generates the target sentence word by word in an auto-regressive manner. However, the auto-regressive decoder faces a deep-rooted one-pass issue: each generated word is treated as part of the final output regardless of whether it is correct. Incorrectly generated words then become part of the target-side historical context and affect the generation of subsequent target words. This paper proposes a novel synchronous refinement method that revises potential errors in the generated words by considering part of the target future context. In particular, the proposed approach allows the auto-regressive decoder to refine the previously generated target words and generate the next target word synchronously. Experimental results on three widely used machine translation tasks demonstrate the effectiveness of the proposed approach.
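The core idea of the abstract (emit the next token, then immediately revisit the previous token now that a bit of right-hand context exists) can be illustrated with a minimal toy sketch. This is not the paper's actual model: the `score` table, the greedy search, and all names here are hypothetical stand-ins for a learned scorer, used only to show the generate-and-refine loop structure.

```python
# Toy sketch of "generate + synchronously refine" decoding.
# Hypothetical scorer (a lookup table), NOT the paper's trained model.

def score(prev, cand, nxt, table):
    """Hypothetical context score: how well `cand` fits between prev and nxt."""
    return table.get((prev, cand, nxt), 0.0)

def decode_with_refinement(vocab, length, table, bos="<s>"):
    out = [bos]
    for _ in range(length):
        # One-pass step: pick the next token greedily from the left context only.
        nxt = max(vocab, key=lambda w: score(out[-1], w, None, table))
        out.append(nxt)
        # Synchronous refinement: revisit the previous token now that a piece
        # of "future" context (the token just emitted) is available.
        if len(out) >= 3:
            left, cur, right = out[-3], out[-2], out[-1]
            best = max(vocab, key=lambda w: score(left, w, right, table))
            if score(left, best, right, table) > score(left, cur, right, table):
                out[-2] = best  # revise the earlier (possibly wrong) word
    return out[1:]

# Usage: a table where the greedy one-pass choice "a" is later revised to "c"
# once the following token "b" reveals that "c" fits the context better.
table = {
    ("<s>", "a", None): 1.0,  # greedy first pick
    ("a", "b", None): 1.0,    # greedy second pick
    ("<s>", "a", "b"): 1.0,   # old word, rescored with right context
    ("<s>", "c", "b"): 2.0,   # better word given the future context
}
print(decode_with_refinement(["a", "b", "c"], 2, table))  # → ['c', 'b']
```

In the real model the refinement and generation share one decoder pass, whereas this sketch runs them as two explicit steps per position for clarity.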
Anthology ID:
2022.findings-acl.235
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2986–2996
URL:
https://aclanthology.org/2022.findings-acl.235
DOI:
10.18653/v1/2022.findings-acl.235
Cite (ACL):
Kehai Chen, Masao Utiyama, Eiichiro Sumita, Rui Wang, and Min Zhang. 2022. Synchronous Refinement for Neural Machine Translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2986–2996, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Synchronous Refinement for Neural Machine Translation (Chen et al., Findings 2022)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2022.findings-acl.235.pdf