Anticipation-Free Training for Simultaneous Machine Translation

Chih-Chiang Chang, Shun-Po Chuang, Hung-yi Lee


Abstract
Simultaneous machine translation (SimulMT) speeds up translation by starting to translate before the source sentence is completely available. The task is difficult due to the limited context and the word-order differences between languages. Existing methods increase latency or introduce adaptive read-write policies so that SimulMT models can handle local reordering and improve translation quality. However, long-distance reordering can cause a SimulMT model to learn translation incorrectly: the model may be forced to predict target tokens before the corresponding source tokens have been read. This leads to aggressive anticipation during inference, resulting in hallucination. To mitigate this problem, we propose a new framework that decomposes the translation process into a monotonic translation step and a reordering step, modeling the latter with an auxiliary sorting network (ASN). The ASN rearranges the hidden states to match the word order of the target language, so that the SimulMT model can learn to translate without anticipation. The entire model is optimized end to end and does not rely on external aligners or data. During inference, the ASN is removed to enable streaming translation. Experiments show that the proposed framework outperforms previous methods at lower latency.
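The repository name (sinkhorn-simultrans) suggests the sorting network is based on Sinkhorn normalization, which approximates a soft permutation matrix by alternately normalizing rows and columns. The following is a minimal illustrative sketch of that idea, not the authors' implementation; the functions sinkhorn and soft_reorder, and all parameter choices, are hypothetical.

import torch

def sinkhorn(logits: torch.Tensor, n_iters: int = 8, tau: float = 1.0) -> torch.Tensor:
    """Approximate a doubly stochastic (soft permutation) matrix by
    alternately normalizing rows and columns in log space."""
    log_p = logits / tau
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=-1, keepdim=True)  # normalize rows
        log_p = log_p - torch.logsumexp(log_p, dim=-2, keepdim=True)  # normalize columns
    return log_p.exp()

def soft_reorder(hidden: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
    """Softly rearrange hidden states (seq_len, d_model) toward the
    target-language order using a Sinkhorn soft permutation."""
    perm = sinkhorn(scores)  # (seq_len, seq_len), rows/columns each sum to ~1
    return perm @ hidden     # softly permuted hidden states

# Usage sketch: pairwise scores might come from dot-product similarity
# between hidden states (an assumption for illustration).
hidden = torch.randn(5, 16)
scores = hidden @ hidden.t()
reordered = soft_reorder(hidden, scores)

Because the soft permutation converges toward a hard permutation (e.g., as tau decreases), such a module can be dropped at inference time, which is consistent with the abstract's statement that the ASN is removed to enable streaming.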
Anthology ID:
2022.iwslt-1.5
Volume:
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
Month:
May
Year:
2022
Address:
Dublin, Ireland (in-person and online)
Editors:
Elizabeth Salesky, Marcello Federico, Marta Costa-jussà
Venue:
IWSLT
SIG:
SIGSLT
Publisher:
Association for Computational Linguistics
Pages:
43–61
URL:
https://aclanthology.org/2022.iwslt-1.5
DOI:
10.18653/v1/2022.iwslt-1.5
Cite (ACL):
Chih-Chiang Chang, Shun-Po Chuang, and Hung-yi Lee. 2022. Anticipation-Free Training for Simultaneous Machine Translation. In Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022), pages 43–61, Dublin, Ireland (in-person and online). Association for Computational Linguistics.
Cite (Informal):
Anticipation-Free Training for Simultaneous Machine Translation (Chang et al., IWSLT 2022)
PDF:
https://aclanthology.org/2022.iwslt-1.5.pdf
Code:
https://github.com/george0828zhang/sinkhorn-simultrans