DIVE: Towards Descriptive and Diverse Visual Commonsense Generation
Jun-Hyung Park, Hyuntae Park, Youjin Kang, Eojin Jeon, SangKeun Lee
Abstract
Towards human-level visual understanding, visual commonsense generation has been introduced to generate commonsense inferences beyond images. However, current research on visual commonsense generation has overlooked an important human cognitive ability: generating descriptive and diverse inferences. In this work, we propose a novel visual commonsense generation framework, called DIVE, which aims to improve the descriptiveness and diversity of generated inferences. DIVE involves two methods, generic inference filtering and contrastive retrieval learning, which address the limitations of existing visual commonsense resources and training objectives. Experimental results verify that DIVE outperforms state-of-the-art models for visual commonsense generation in terms of both descriptiveness and diversity, while showing superior quality in generating unique and novel inferences. Notably, DIVE achieves human-level descriptiveness and diversity on Visual Commonsense Graphs. Furthermore, human evaluations confirm that DIVE aligns closely with human judgments on descriptiveness and diversity.
- Anthology ID:
- 2023.emnlp-main.601
- Volume:
- Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month:
- December
- Year:
- 2023
- Address:
- Singapore
- Editors:
- Houda Bouamor, Juan Pino, Kalika Bali
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 9677–9695
- URL:
- https://aclanthology.org/2023.emnlp-main.601
- DOI:
- 10.18653/v1/2023.emnlp-main.601
- Cite (ACL):
- Jun-Hyung Park, Hyuntae Park, Youjin Kang, Eojin Jeon, and SangKeun Lee. 2023. DIVE: Towards Descriptive and Diverse Visual Commonsense Generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9677–9695, Singapore. Association for Computational Linguistics.
- Cite (Informal):
- DIVE: Towards Descriptive and Diverse Visual Commonsense Generation (Park et al., EMNLP 2023)
- PDF:
- https://preview.aclanthology.org/add_acl24_videos/2023.emnlp-main.601.pdf