Abstract
We propose a methodology and design two benchmark sets for measuring to what extent language-and-vision models use the visual signal in the presence or absence of stereotypes. The first benchmark is designed to test for stereotypical colors of common objects, while the second benchmark considers gender stereotypes. The key idea is to compare predictions when the image conforms to the stereotype to predictions when it does not. Our results show that there is significant variation among multimodal models: the recent Transformer-based FLAVA seems to be more sensitive to the choice of image and less affected by stereotypes than older CNN-based models such as VisualBERT and LXMERT. This effect is more discernible in this type of controlled setting than in traditional evaluations, where we do not know whether the model relied on the stereotype or the visual signal.
- Anthology ID:
- 2022.blackboxnlp-1.21
- Volume:
- Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates (Hybrid)
- Editors:
- Jasmijn Bastings, Yonatan Belinkov, Yanai Elazar, Dieuwke Hupkes, Naomi Saphra, Sarah Wiegreffe
- Venue:
- BlackboxNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 263–271
- URL:
- https://aclanthology.org/2022.blackboxnlp-1.21
- DOI:
- 10.18653/v1/2022.blackboxnlp-1.21
- Cite (ACL):
- Manuj Malik and Richard Johansson. 2022. Controlling for Stereotypes in Multimodal Language Model Evaluation. In Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 263–271, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
- Cite (Informal):
- Controlling for Stereotypes in Multimodal Language Model Evaluation (Malik & Johansson, BlackboxNLP 2022)
- PDF:
- https://preview.aclanthology.org/fix-volume-bibkeys/2022.blackboxnlp-1.21.pdf