Multi-Granularity Optimization for Non-Autoregressive Translation

Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang


Abstract
Despite low latency, non-autoregressive machine translation (NAT) suffers from severe performance degradation due to the naive independence assumption. This assumption is further strengthened by cross-entropy loss, which encourages a strict token-by-token match between the hypothesis and the reference. To alleviate this issue, we propose multi-granularity optimization for NAT, which collects model behaviours on translation segments of various granularities and integrates feedback for backpropagation. Experiments on four WMT benchmarks show that the proposed method significantly outperforms the baseline models trained with cross-entropy loss, and achieves the best performance on WMT’16 En⇔Ro and highly competitive results on WMT’14 En⇔De for fully non-autoregressive translation.
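To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a multi-granularity objective: hypothesis segments of several sizes are scored against the reference, and the segment-level feedback weights the model's log-likelihood before backpropagation. The function name, the overlap-based reward, and the fixed segment alignment are all illustrative assumptions; the paper's actual formulation of segment feedback differs in its details.

    import torch

    def multi_granularity_loss(log_probs, hyp, ref, granularities=(1, 2, 4)):
        # log_probs: (T, V) per-position log-probabilities from a NAT decoder
        # hyp, ref: length-T LongTensors of token ids (equal length assumed here)
        T = hyp.size(0)
        # log-likelihood of each hypothesis token under the model
        token_lp = log_probs.gather(1, hyp.unsqueeze(1)).squeeze(1)  # (T,)
        loss = log_probs.new_zeros(())
        for g in granularities:
            for start in range(0, T - g + 1, g):  # non-overlapping segments of size g
                seg = slice(start, start + g)
                # crude segment reward: fraction of hypothesis tokens that appear
                # in the aligned reference window (a stand-in for an n-gram metric)
                reward = (hyp[seg].unsqueeze(1) == ref[seg].unsqueeze(0)).any(dim=1).float().mean()
                # REINFORCE-style term: raise the likelihood of well-scoring segments,
                # treating the (non-differentiable) reward as a constant weight
                loss = loss - reward * token_lp[seg].sum()
        return loss / len(granularities)

    # toy usage
    T, V = 8, 100
    logits = torch.randn(T, V, requires_grad=True)
    hyp = torch.randint(V, (T,))
    ref = torch.randint(V, (T,))
    loss = multi_granularity_loss(logits.log_softmax(-1), hyp, ref)
    loss.backward()

Unlike token-level cross-entropy, which penalizes any deviation from the reference at each position, this kind of objective rewards segments that match well at some granularity, loosening the strict positional match the abstract identifies as the problem.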
Anthology ID: 2022.emnlp-main.339
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 5073–5084
URL: https://aclanthology.org/2022.emnlp-main.339
DOI: 10.18653/v1/2022.emnlp-main.339
Cite (ACL): Yafu Li, Leyang Cui, Yongjing Yin, and Yue Zhang. 2022. Multi-Granularity Optimization for Non-Autoregressive Translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5073–5084, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Multi-Granularity Optimization for Non-Autoregressive Translation (Li et al., EMNLP 2022)
PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.339.pdf