Dan Oneaţă
Also published as: Dan Oneață
2024
Visually Grounded Speech Models Have a Mutual Exclusivity Bias
Leanne Nortje | Dan Oneaţă | Yevgen Matusevych | Herman Kamper
Transactions of the Association for Computational Linguistics, Volume 12
When children learn new words, they employ constraints such as the mutual exclusivity (ME) bias: A novel word is mapped to a novel object rather than a familiar one. This bias has been studied computationally, but only in models that use discrete word representations as input, ignoring the high variability of spoken words. We investigate the ME bias in the context of visually grounded speech models that learn from natural images and continuous speech audio. Concretely, we train a model on familiar words and test its ME bias by asking it to select between a novel and a familiar object when queried with a novel word. To simulate prior acoustic and visual knowledge, we experiment with several initialization strategies using pretrained speech and vision networks. Our findings reveal the ME bias across the different initialization approaches, with a stronger bias in models with more prior (in particular, visual) knowledge. Additional tests confirm the robustness of our results, even when different loss functions are considered. Based on detailed analyses that tease apart the model's representation space, we attribute the ME bias to how familiar and novel classes are distinctly separated in the resulting space.
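To make the evaluation setup concrete, the following is a minimal sketch of a single ME test trial under stated assumptions: `model.embed_audio` and `model.embed_image` are hypothetical stand-ins for the audio and image encoders of a visually grounded speech model, and cosine similarity is used as one plausible audio-image matching score; this is an illustration of the trial structure described in the abstract, not the paper's exact implementation.

```python
import torch


def mutual_exclusivity_trial(model, novel_audio, novel_image, familiar_image):
    """One ME trial: given a novel spoken word, does the model pick the
    novel object over the familiar one?  Encoder methods are assumed."""
    audio_emb = model.embed_audio(novel_audio)        # shape (d,), hypothetical API
    novel_emb = model.embed_image(novel_image)        # shape (d,), hypothetical API
    familiar_emb = model.embed_image(familiar_image)  # shape (d,), hypothetical API

    sim_novel = torch.cosine_similarity(audio_emb, novel_emb, dim=0)
    sim_familiar = torch.cosine_similarity(audio_emb, familiar_emb, dim=0)

    # The ME bias predicts the novel word is mapped to the novel object.
    return bool(sim_novel > sim_familiar)


def me_bias_score(model, trials):
    """Fraction of trials where the novel object wins; values above 0.5
    indicate an ME bias."""
    wins = [mutual_exclusivity_trial(model, a, n, f) for a, n, f in trials]
    return sum(wins) / len(wins)
```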
2022
Multilingual Multimodal Learning with Machine Translated Text
Chen Qiu | Dan Oneață | Emanuele Bugliarello | Stella Frank | Desmond Elliott
Findings of the Association for Computational Linguistics: EMNLP 2022
Most vision-and-language pretraining research focuses on English tasks. However, the creation of multilingual multimodal evaluation datasets (e.g. Multi30K, xGQA, XVNLI, and MaRVL) poses a new challenge in finding high-quality training data that is both multilingual and multimodal. In this paper, we investigate whether machine translating English multimodal data can be an effective proxy for the lack of readily available multilingual data. We call this framework TD-MML: Translated Data for Multilingual Multimodal Learning, and it can be applied to any multimodal dataset and model. We apply it to both pretraining and fine-tuning data with a state-of-the-art model. In order to prevent models from learning from low-quality translated text, we propose two metrics for automatically removing such translations from the resulting datasets. In experiments on five tasks across 20 languages in the IGLUE benchmark, we show that translated data can provide a useful signal for multilingual multimodal learning, both at pretraining and fine-tuning.
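The abstract mentions two metrics for automatically removing low-quality translated text but does not spell them out here. The sketch below shows two generic heuristics of that kind, a copy ratio (how much of the source text is left untranslated) and a repetition ratio (how degenerate the output is); both thresholds and both heuristics are illustrative assumptions, not the paper's actual metrics.

```python
from collections import Counter


def copy_ratio(source_tokens, translation_tokens):
    """Fraction of translated tokens copied verbatim from the source;
    a high value suggests the sentence was left largely untranslated."""
    if not translation_tokens:
        return 1.0
    source_vocab = set(source_tokens)
    copied = sum(tok in source_vocab for tok in translation_tokens)
    return copied / len(translation_tokens)


def repetition_ratio(translation_tokens):
    """Fraction of tokens belonging to the single most frequent type;
    degenerate MT output often repeats one token many times."""
    if not translation_tokens:
        return 1.0
    counts = Counter(translation_tokens)
    return max(counts.values()) / len(translation_tokens)


def keep_translation(source, translation, max_copy=0.5, max_rep=0.3):
    """Keep a machine-translated caption only if both heuristics pass.
    Thresholds are arbitrary placeholders for illustration."""
    src, tgt = source.split(), translation.split()
    return copy_ratio(src, tgt) <= max_copy and repetition_ratio(tgt) <= max_rep
```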
Co-authors
- Leanne Nortje 1
- Yevgen Matusevych 1
- Herman Kamper 1
- Chen Qiu 1
- Emanuele Bugliarello 1
- Stella Frank 1
- Desmond Elliott 1