Abstract
In this paper, we investigate the impact of objects on gender bias in image captioning systems. Our results show that only gender-specific objects carry a strong gender bias (e.g., women–lipstick). In addition, we propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system. Our experiments demonstrate the utility of this score: it captures the bias relation between a caption and its associated gender, and can therefore serve as a complementary metric to the existing Object Gender Co-Occ approach.
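For context, below is a minimal sketch of the object–gender co-occurrence counting that the Object Gender Co-Occ metric mentioned in the abstract builds on. The gender word lists, function names, and the bias-ratio definition are illustrative assumptions for exposition, not the paper's exact formulation.

```python
from collections import Counter

# Illustrative gender word lists; the actual word sets used in the
# literature (and in this paper) may differ.
FEMALE_WORDS = {"woman", "women", "girl", "she", "her"}
MALE_WORDS = {"man", "men", "boy", "he", "his"}

def object_gender_cooc(captions, target_object):
    """Count how often `target_object` co-occurs with female vs. male
    words across a list of caption strings (Object Gender Co-Occ idea)."""
    counts = Counter()
    for caption in captions:
        tokens = set(caption.lower().split())
        if target_object not in tokens:
            continue
        if tokens & FEMALE_WORDS:
            counts["female"] += 1
        if tokens & MALE_WORDS:
            counts["male"] += 1
    return counts

def bias_ratio(counts, gender="female"):
    """Fraction of gendered co-occurrences assigned to `gender`;
    0.5 would indicate no measured bias toward either gender."""
    total = counts["female"] + counts["male"]
    return counts[gender] / total if total else 0.0

captions = [
    "a woman wearing lipstick and a red dress",
    "a man holding a skateboard",
    "a woman applying lipstick in a mirror",
]
counts = object_gender_cooc(captions, "lipstick")
print(bias_ratio(counts, "female"))  # 1.0: 'lipstick' co-occurs only with female words here
```

Unlike this purely textual count, the gender score proposed in the paper is visual-semantic: it scores a caption against the image content, which is what lets it plug into any captioning system.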
- Anthology ID: 2023.findings-emnlp.279
- Original: 2023.findings-emnlp.279v1
- Version 2: 2023.findings-emnlp.279v2
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4234–4240
- URL: https://aclanthology.org/2023.findings-emnlp.279
- Cite (ACL): Ahmed Sabir and Lluís Padró. 2023. Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4234–4240, Singapore. Association for Computational Linguistics.
- Cite (Informal): Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender (Sabir & Padró, Findings 2023)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2023.findings-emnlp.279.pdf