Abstract
Machine translation models have discrete vocabularies and commonly use subword segmentation techniques to achieve an ‘open vocabulary.’ This approach relies on consistent and correct underlying Unicode sequences, and makes models susceptible to degradation from common types of noise and variation. Motivated by the robustness of human language processing, we propose the use of visual text representations, which dispense with a finite set of text embeddings in favor of continuous vocabularies created by processing visually rendered text with sliding windows. We show that models using visual text representations approach or match the performance of traditional text models on small and larger datasets. More importantly, models with visual embeddings demonstrate significant robustness to varied types of noise, achieving, e.g., 25.9 BLEU on a character-permuted German–English task where subword models degrade to 1.9.

- Anthology ID: 2021.emnlp-main.576
- Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2021
- Address: Online and Punta Cana, Dominican Republic
- Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 7235–7252
- URL: https://aclanthology.org/2021.emnlp-main.576
- DOI: 10.18653/v1/2021.emnlp-main.576
- Cite (ACL): Elizabeth Salesky, David Etter, and Matt Post. 2021. Robust Open-Vocabulary Translation from Visual Text Representations. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7235–7252, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): Robust Open-Vocabulary Translation from Visual Text Representations (Salesky et al., EMNLP 2021)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2021.emnlp-main.576.pdf
- Code: esalesky/visrep
- Data: MTNT