A Multilingual Multiway Evaluation Data Set for Structured Document Translation of Asian Languages

Bianka Buschbeck, Raj Dabre, Miriam Exel, Matthias Huck, Patrick Huy, Raphael Rubino, Hideki Tanaka


Abstract
Translation of structured content is an important application of machine translation, but the scarcity of evaluation data sets, especially for Asian languages, limits progress. In this paper, we present a novel multilingual multiway evaluation data set for the translation of structured documents in the Asian languages Japanese, Korean, and Chinese. We describe the data set, its creation process, and its important characteristics, and then establish and evaluate baselines using both the direct translation and the detag-project approaches. Our data set is well suited for multilingual evaluation, and it contains richer annotation tag sets than existing data sets. Our results show that massively multilingual translation models such as M2M-100 and mBART-50 perform surprisingly well despite not being explicitly trained to handle structured content. The data set described in this paper and used in our experiments is publicly released.
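To illustrate the two baseline strategies named in the abstract, the sketch below contrasts direct translation of a tag-annotated segment with a detag step, using the publicly available M2M-100 checkpoint on Hugging Face. This is a minimal, hypothetical example: the checkpoint, the tag-matching regex, the example segment, and the helper function are illustrative assumptions, not the paper's actual pipeline, and the projection of tags back onto the translation (which the detag-project baseline requires, typically via word alignment) is omitted.

```python
# Hypothetical sketch of the two baselines mentioned in the abstract; the paper's
# exact models, tag inventory, and tag-projection method may differ.
import re
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

TAG_RE = re.compile(r"</?\w+[^>]*>")  # matches inline markup such as <b> ... </b>


def translate(text: str, src: str, tgt: str) -> str:
    """Translate plain text with M2M-100."""
    tokenizer.src_lang = src
    encoded = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **encoded, forced_bos_token_id=tokenizer.get_lang_id(tgt)
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]


segment = "このボタンを押すと<b>設定</b>が開きます。"  # illustrative tagged source segment

# Direct translation: feed the tagged segment to the model as-is and rely on
# the model copying the tags through to the output.
direct_output = translate(segment, src="ja", tgt="ko")

# Detag-project: strip the tags, translate the plain text, and then re-insert
# the tags into the translation (the projection step is not shown here).
plain = TAG_RE.sub("", segment)
detagged_output = translate(plain, src="ja", tgt="ko")
```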
Anthology ID: 2022.findings-aacl.23
Volume: Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022
Month: November
Year: 2022
Address: Online only
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 237–245
URL: https://aclanthology.org/2022.findings-aacl.23
Cite (ACL): Bianka Buschbeck, Raj Dabre, Miriam Exel, Matthias Huck, Patrick Huy, Raphael Rubino, and Hideki Tanaka. 2022. A Multilingual Multiway Evaluation Data Set for Structured Document Translation of Asian Languages. In Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022, pages 237–245, Online only. Association for Computational Linguistics.
Cite (Informal): A Multilingual Multiway Evaluation Data Set for Structured Document Translation of Asian Languages (Buschbeck et al., Findings 2022)
PDF: https://preview.aclanthology.org/ingestion-script-update/2022.findings-aacl.23.pdf