Enabling Unsupervised Neural Machine Translation with Word-level Visual Representations

Chengpeng Fu, Xiaocheng Feng, Yichong Huang, Wenshuai Huo, Hui Wang, Bing Qin, Ting Liu


Abstract
Unsupervised neural machine translation has recently made remarkable strides, achieving impressive results with the exclusive use of monolingual corpora. Nonetheless, these methods still exhibit fundamental flaws, such as confusing similar words. A straightforward remedy for this drawback is to employ bilingual dictionaries; however, high-quality bilingual dictionaries can be costly to obtain. To overcome this limitation, we propose a method that incorporates images at the word level to augment the lexical mappings. Specifically, our method inserts visual representations into the model, modifying the corresponding embedding-layer information. In addition, a visible matrix is adopted to isolate the impact of images on other, unrelated words. Experiments on the Multi30k dataset with over 300,000 self-collected images validate the effectiveness of the method in generating more accurate word translations, achieving an improvement of up to +2.81 BLEU, which is comparable or even superior to using bilingual dictionaries.
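
The abstract only sketches the mechanism, so the snippet below is an illustrative, hedged reconstruction of one way word-level visual representations and a visible matrix could be wired together: image embeddings are appended to the word embeddings, and an attention mask restricts each image token to its grounded word so the visual signal does not leak into unrelated positions. The function name, tensor layout, and grounding scheme (`word_to_image`) are assumptions for illustration, not the authors' actual implementation.

```python
import torch

def build_inputs_with_visual_tokens(word_emb, image_emb, word_to_image):
    """Append word-level visual representations and build a visible matrix.

    word_emb:      (T, d) word embeddings for one sentence
    image_emb:     (K, d) visual representations, one per grounded word
    word_to_image: list of K word indices; word_to_image[k] is the word
                   that image k is associated with

    Returns the concatenated embedding sequence and an additive attention
    mask in which each image token is visible only to its own word.
    """
    T, _ = word_emb.shape
    K = image_emb.shape[0]
    x = torch.cat([word_emb, image_emb], dim=0)            # (T + K, d)

    # Words remain fully visible to one another.
    visible = torch.zeros(T + K, T + K, dtype=torch.bool)
    visible[:T, :T] = True

    # Each image token interacts only with its grounded word (and itself).
    for k, w in enumerate(word_to_image):
        i = T + k
        visible[i, i] = True
        visible[i, w] = True
        visible[w, i] = True

    # 0 where visible, -inf elsewhere; usable as attn_mask in
    # torch.nn.MultiheadAttention for masked self-attention.
    attn_mask = torch.zeros(T + K, T + K).masked_fill(~visible, float("-inf"))
    return x, attn_mask
```

Under this sketch, the mask would be passed as `attn_mask` to the encoder's self-attention layers (e.g. `torch.nn.MultiheadAttention`), so that only the grounded word's representation is modified by its image while all other words attend as in a standard text-only encoder.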
Anthology ID:
2023.findings-emnlp.839
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12608–12618
URL:
https://aclanthology.org/2023.findings-emnlp.839
DOI:
10.18653/v1/2023.findings-emnlp.839
Cite (ACL):
Chengpeng Fu, Xiaocheng Feng, Yichong Huang, Wenshuai Huo, Hui Wang, Bing Qin, and Ting Liu. 2023. Enabling Unsupervised Neural Machine Translation with Word-level Visual Representations. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12608–12618, Singapore. Association for Computational Linguistics.
Cite (Informal):
Enabling Unsupervised Neural Machine Translation with Word-level Visual Representations (Fu et al., Findings 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.findings-emnlp.839.pdf