Abstract
There is a scarcity of multilingual vision-language models that properly account for the perceptual differences that are reflected in image captions across languages and cultures. In this work, through a multimodal, multilingual retrieval case study, we quantify the existing lack of model flexibility. We empirically show performance gaps between training on captions that come from native German perception and captions that have been either machine-translated or human-translated from English into German. To address these gaps, we further propose and evaluate caption augmentation strategies. While we achieve mean recall improvements (+1.3), gaps still remain, indicating an open area of future work for the community.
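For context, "mean recall" in image-text retrieval is conventionally the average of Recall@1, Recall@5, and Recall@10 over both the image-to-text and text-to-image directions, so the +1.3 figure above is an improvement on that scale. Below is a minimal sketch of the metric, assuming matched pairs sit on the diagonal of a similarity matrix (e.g., cosine similarities from a CLIP-style encoder); the random inputs are placeholders, not the paper's data or code.

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    """Fraction of queries (rows) whose ground-truth match
    (assumed to be the same-index column) ranks in the top k."""
    order = np.argsort(-sim, axis=1)       # columns sorted by descending similarity
    gt = np.arange(sim.shape[0])[:, None]  # ground-truth column index per row
    ranks = np.argwhere(order == gt)[:, 1] # position of the ground truth in each ranking
    return float(np.mean(ranks < k))

def mean_recall(sim: np.ndarray) -> float:
    """Average of R@{1,5,10} over image->text (rows query columns)
    and text->image (the transpose): the customary 'mean recall'."""
    return float(np.mean([recall_at_k(s, k)
                          for s in (sim, sim.T)
                          for k in (1, 5, 10)]))

# Hypothetical similarity matrix standing in for model scores:
# sim[i, j] = similarity between image i and caption j.
rng = np.random.default_rng(0)
sim = rng.standard_normal((100, 100))
print(f"mean recall: {100 * mean_recall(sim):.1f}")  # reported on a 0-100 scale
```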
- Anthology ID: 2024.emnlp-main.335
- Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 5863–5870
- URL: https://aclanthology.org/2024.emnlp-main.335
- DOI: 10.18653/v1/2024.emnlp-main.335
- Cite (ACL): Kyle Buettner and Adriana Kovashka. 2024. Quantifying the Gaps Between Translation and Native Perception in Training for Multimodal, Multilingual Retrieval. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 5863–5870, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): Quantifying the Gaps Between Translation and Native Perception in Training for Multimodal, Multilingual Retrieval (Buettner & Kovashka, EMNLP 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.emnlp-main.335.pdf