Robust Neural Machine Translation with Joint Textual and Phonetic Embedding

Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, Zhongjun He


Abstract
Neural machine translation (NMT) is notoriously sensitive to noise, which is almost inevitable in practice. One special kind of noise is homophone noise, where words are replaced by other words with similar pronunciations. We propose to improve the robustness of NMT to homophone noise by 1) jointly embedding both the textual and phonetic information of source sentences, and 2) augmenting the training dataset with homophone noise. Interestingly, to achieve both better translation quality and greater robustness, we found that most (though not all) of the weight should be put on the phonetic rather than the textual information. Experiments show that our method not only significantly improves the robustness of NMT to homophone noise, but also, surprisingly, improves translation quality on some clean test sets.
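The joint embedding described in the abstract can be pictured as a weighted combination of a word's textual embedding and its phonetic embedding, with most of the weight on the phonetic side. The sketch below is illustrative only: the combination rule, the weight `beta`, and the toy vectors are assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

def joint_embed(text_emb, phon_emb, beta=0.95):
    """Hypothetical convex combination of textual and phonetic embeddings.

    With beta close to 1 (mostly phonetic, as the abstract suggests),
    homophones — which share phon_emb — receive nearby joint embeddings
    even when their textual embeddings differ, which is what makes the
    model robust to homophone substitutions.
    """
    return (1.0 - beta) * text_emb + beta * phon_emb

rng = np.random.default_rng(0)
d = 8  # toy embedding dimension

e_text_a = rng.normal(size=d)  # textual embedding of a word
e_text_b = rng.normal(size=d)  # textual embedding of a homophone of it
e_phon = rng.normal(size=d)    # shared phonetic embedding (e.g. same pinyin)

joint_a = joint_embed(e_text_a, e_phon)
joint_b = joint_embed(e_text_b, e_phon)

# The homophones' joint embeddings are much closer than their
# textual embeddings, since beta=0.95 weights the shared phonetic part.
dist_textual = np.linalg.norm(e_text_a - e_text_b)
dist_joint = np.linalg.norm(joint_a - joint_b)
```

Note that `dist_joint` equals `(1 - beta) * dist_textual` exactly under this linear rule, so raising `beta` shrinks the gap between homophones; the paper's finding that most (but not all) weight should be phonetic corresponds to `beta` large but strictly below 1.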
Anthology ID:
P19-1291
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3044–3049
URL:
https://aclanthology.org/P19-1291
DOI:
10.18653/v1/P19-1291
Cite (ACL):
Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. 2019. Robust Neural Machine Translation with Joint Textual and Phonetic Embedding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3044–3049, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Robust Neural Machine Translation with Joint Textual and Phonetic Embedding (Liu et al., ACL 2019)
PDF:
https://preview.aclanthology.org/fix-dup-bibkey/P19-1291.pdf