Abstract
We revisit a particular visual grounding method: the “Image Retrieval Using Scene Graphs” (IRSG) system of Johnson et al. Our experiments indicate that the system does not effectively use its learned object-relationship models. We also look closely at the IRSG dataset, as well as the widely used Visual Relationship Dataset (VRD) that is adapted from it. We find that these datasets exhibit biases that allow methods that ignore relationships to perform relatively well. We also describe several other problems with the IRSG dataset, and report on experiments using a subset of the dataset in which the biases and other problems are removed. Our studies contribute to a more general effort: that of better understanding what machine-learning methods that combine language and vision actually learn and what popular datasets actually test.
- Anthology ID:
- W19-1804
- Volume:
- Proceedings of the Second Workshop on Shortcomings in Vision and Language
- Month:
- June
- Year:
- 2019
- Address:
- Minneapolis, Minnesota
- Editors:
- Raffaella Bernardi, Raquel Fernandez, Spandana Gella, Kushal Kafle, Christopher Kanan, Stefan Lee, Moin Nabi
- Venue:
- NAACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 37–46
- URL:
- https://aclanthology.org/W19-1804
- DOI:
- 10.18653/v1/W19-1804
- Cite (ACL):
- Erik Conser, Kennedy Hahn, Chandler Watson, and Melanie Mitchell. 2019. Revisiting Visual Grounding. In Proceedings of the Second Workshop on Shortcomings in Vision and Language, pages 37–46, Minneapolis, Minnesota. Association for Computational Linguistics.
- Cite (Informal):
- Revisiting Visual Grounding (Conser et al., NAACL 2019)
- PDF:
- https://aclanthology.org/W19-1804.pdf
- Data
- VRD