SIT at MixMT 2022: Fluent Translation Built on Giant Pre-trained Models

Abdul Khan, Hrishikesh Kanade, Girish Budhrani, Preet Jhanglani, Jia Xu


Abstract
This paper describes the Stevens Institute of Technology's submission to the WMT 2022 Shared Task: Code-mixed Machine Translation (MixMT). The task consisted of two subtasks: subtask 1, translating Hindi/English into Hinglish, and subtask 2, translating Hinglish into English. Our improvements come from the use of large pre-trained multilingual NMT models and in-domain datasets, together with back-translation and ensembling techniques. Translation output is evaluated automatically against the reference translations using ROUGE-L and WER. Our system placed 1st on subtask 2 according to ROUGE-L, WER, and human evaluation; 1st on subtask 1 according to WER and human evaluation; and 3rd on subtask 1 according to ROUGE-L.
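To make the automatic evaluation concrete, below is a minimal Python sketch of scoring a hypothesis translation against a reference with ROUGE-L and WER, the two metrics named in the abstract. The rouge_score and jiwer packages and the example sentence pair are illustrative assumptions, not the tooling used by the task organizers or the authors.

# Minimal sketch: ROUGE-L and WER between a system output and a reference,
# mirroring the shared task's automatic evaluation. Library choices and
# example strings are assumptions for illustration only.
from rouge_score import rouge_scorer
from jiwer import wer

reference = "I will meet you at the station tomorrow"   # placeholder reference translation
hypothesis = "I will meet you at station tomorrow"      # placeholder system output

# ROUGE-L: F-measure over the longest common subsequence (higher is better).
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(reference, hypothesis)["rougeL"].fmeasure

# WER: word-level edit distance normalized by reference length (lower is better).
word_error_rate = wer(reference, hypothesis)

print(f"ROUGE-L F1: {rouge_l:.3f}, WER: {word_error_rate:.3f}")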
Anthology ID: 2022.wmt-1.114
Volume: Proceedings of the Seventh Conference on Machine Translation (WMT)
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates (Hybrid)
Venue: WMT
Publisher: Association for Computational Linguistics
Pages: 1136–1144
URL: https://aclanthology.org/2022.wmt-1.114
Cite (ACL): Abdul Khan, Hrishikesh Kanade, Girish Budhrani, Preet Jhanglani, and Jia Xu. 2022. SIT at MixMT 2022: Fluent Translation Built on Giant Pre-trained Models. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1136–1144, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal): SIT at MixMT 2022: Fluent Translation Built on Giant Pre-trained Models (Khan et al., WMT 2022)
PDF: https://preview.aclanthology.org/ingestion-script-update/2022.wmt-1.114.pdf