Revisiting Context Choices for Context-aware Machine Translation

Matiss Rikters, Toshiaki Nakazawa


Abstract
One of the most popular methods for context-aware machine translation (MT) is to use separate encoders for the source sentence and its context as multiple sources for one target sentence. Recent work has cast doubt on whether these models actually learn useful signals from the context, or whether the improvements in automatic evaluation metrics are merely a side-effect. We show that multi-source transformer models improve MT over standard transformer-base models even when empty lines are provided as context, but that translation quality improves significantly (1.51–2.65 BLEU) when a sufficient amount of correct context is provided. We also show that although randomly shuffling in-domain context can also improve over the baselines, the correct context improves translation quality further, while random out-of-domain context further degrades it.
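The multi-source setup described in the abstract is typically realized as two parallel Transformer encoders, one for the source sentence and one for the context, whose outputs are both exposed to a single decoder. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, assuming the simple variant in which the two encoder memories are concatenated before decoding; the hyperparameters and combination strategy are placeholders, not the authors' exact configuration.

```python
# Minimal sketch of a multi-source Transformer for context-aware MT
# (illustrative only; positional encodings and padding masks omitted).
import torch
import torch.nn as nn

class MultiSourceTransformer(nn.Module):
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Separate encoders: one for the source sentence, one for the context.
        # nn.TransformerEncoder deep-copies the layer, so parameters are not shared.
        self.src_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.ctx_encoder = nn.TransformerEncoder(enc_layer, num_layers)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.out_proj = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, ctx_ids, tgt_ids):
        src_mem = self.src_encoder(self.embed(src_ids))
        ctx_mem = self.ctx_encoder(self.embed(ctx_ids))
        # One simple way to expose both sources to the decoder:
        # concatenate the two encoder memories along the sequence dimension.
        memory = torch.cat([src_mem, ctx_mem], dim=1)
        # Causal mask so each target position only attends to earlier positions.
        tgt_len = tgt_ids.size(1)
        causal = torch.triu(torch.full((tgt_len, tgt_len), float("-inf")), diagonal=1)
        dec_out = self.decoder(self.embed(tgt_ids), memory, tgt_mask=causal)
        return self.out_proj(dec_out)

if __name__ == "__main__":
    model = MultiSourceTransformer(vocab_size=1000)
    src = torch.randint(0, 1000, (2, 12))  # source sentences
    ctx = torch.randint(0, 1000, (2, 20))  # e.g. preceding sentences as context
    tgt = torch.randint(0, 1000, (2, 14))  # shifted target tokens
    print(model(src, ctx, tgt).shape)      # torch.Size([2, 14, 1000])
```

Feeding an empty or shuffled context simply changes what `ctx_ids` contains, which is what makes this architecture convenient for the ablations the paper reports.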
Anthology ID:
2024.lrec-main.1226
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
14073–14079
URL:
https://aclanthology.org/2024.lrec-main.1226
Cite (ACL):
Matiss Rikters and Toshiaki Nakazawa. 2024. Revisiting Context Choices for Context-aware Machine Translation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 14073–14079, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Revisiting Context Choices for Context-aware Machine Translation (Rikters & Nakazawa, LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2024.lrec-main.1226.pdf
Optional supplementary material:
2024.lrec-main.1226.OptionalSupplementaryMaterial.zip