MURAL: Multimodal, Multitask Representations Across Languages
Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, Jason Baldridge
Abstract
Both image-caption pairs and translation pairs provide the means to learn deep representations of and connections between languages. We use both types of pairs in MURAL (MUltimodal, MUltitask Representations Across Languages), a dual encoder that solves two tasks: 1) image-text matching and 2) translation pair matching. By incorporating billions of translation pairs, MURAL extends ALIGN (Jia et al.), a state-of-the-art dual encoder learned from 1.8 billion noisy image-text pairs. When using the same encoders, MURAL’s performance matches or exceeds ALIGN’s cross-modal retrieval performance on well-resourced languages across several datasets. More importantly, it considerably improves performance on under-resourced languages, showing that text-text learning can overcome a paucity of image-caption examples for these languages. On the Wikipedia Image-Text dataset, for example, MURAL-base improves zero-shot mean recall by 8.1% on average for eight under-resourced languages and by 6.8% on average when fine-tuning. We additionally show that MURAL’s text representations cluster not only with respect to genealogical connections but also based on areal linguistics, such as the Balkan Sprachbund.
- Anthology ID:
- 2021.findings-emnlp.293
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2021
- Month:
- November
- Year:
- 2021
- Address:
- Punta Cana, Dominican Republic
- Editors:
- Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
- Venue:
- Findings
- SIG:
- SIGDAT
- Publisher:
- Association for Computational Linguistics
- Note:
- Pages:
- 3449–3463
- Language:
- URL:
- https://preview.aclanthology.org/add_missing_videos/2021.findings-emnlp.293/
- DOI:
- 10.18653/v1/2021.findings-emnlp.293
- Cite (ACL):
- Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, and Jason Baldridge. 2021. MURAL: Multimodal, Multitask Representations Across Languages. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3449–3463, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal):
- MURAL: Multimodal, Multitask Representations Across Languages (Jain et al., Findings 2021)
- PDF:
- https://preview.aclanthology.org/add_missing_videos/2021.findings-emnlp.293.pdf
- Data
- CxC, Flickr30k, MS COCO, WIT
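The dual-encoder multitask objective described in the abstract — an image-text matching task plus a translation-pair matching task, each trained with in-batch contrastive loss — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, temperature value, and task weights are assumptions, and the encoders are replaced by pre-computed embedding matrices.

```python
import numpy as np

def contrastive_loss(a, b, temperature=0.1):
    """Bidirectional in-batch softmax contrastive loss for a dual encoder.

    Rows a[i] and b[i] are embeddings of a matched pair (positive); all
    other in-batch rows serve as negatives. Temperature is illustrative.
    """
    # L2-normalize so logits are scaled cosine similarities.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    idx = np.arange(len(a))

    def xent(l):
        # Cross-entropy with the diagonal (matched pair) as the target class.
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average both retrieval directions (a->b and b->a).
    return 0.5 * (xent(logits) + xent(logits.T))

def multitask_loss(img, cap, src_txt, tgt_txt, w_i2t=1.0, w_t2t=1.0):
    """Weighted sum of the two MURAL-style tasks: image-text matching on
    (image, caption) pairs and translation-pair matching on (source, target)
    text pairs. The weights here are placeholders, not values from the paper.
    """
    return (w_i2t * contrastive_loss(img, cap)
            + w_t2t * contrastive_loss(src_txt, tgt_txt))
```

In the paper's setting the text encoder is shared between the two tasks, which is what lets abundant translation pairs improve image-text retrieval for under-resourced languages; here that sharing would correspond to `cap`, `src_txt`, and `tgt_txt` coming from the same encoder.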