Abstract
Multimodal language generation, which leverages the synergy of language and vision, is a rapidly expanding field. However, existing vision-language models face challenges in tasks that require complex linguistic understanding. To address this issue, we introduce Visual-Language models as Importance Sampling weights (VLIS), a novel framework that combines the visual conditioning capability of vision-language models with the language understanding of unimodal text-only language models without further training. It extracts pointwise mutual information of each image and text from a visual-language model and uses the value as an importance sampling weight to adjust the token likelihood from a text-only model. VLIS improves vision-language models on diverse tasks, including commonsense understanding (WHOOPS, OK-VQA, and ScienceQA) and complex text generation (Concadia, Image Paragraph Captioning, and ROCStories). Our results suggest that VLIS represents a promising new direction for multimodal language generation.
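The abstract describes a simple decoding rule: reweight a text-only language model's next-token likelihoods with the image-text pointwise mutual information (PMI) estimated by a vision-language model. The sketch below illustrates one such decoding step under stated assumptions; the function and variable names (`vlis_step`, `vlm_logits_cond`, `vlm_logits_marg`, `lm_logits`, `alpha`) are hypothetical and this is not the authors' released implementation.

```python
# Minimal sketch of a VLIS-style decoding step (illustrative only), assuming:
# - vlm_logits_cond: VLM next-token logits conditioned on (image, text)
# - vlm_logits_marg: VLM next-token logits with the image marginalized out
#   (e.g., a blank image), approximating p_vlm(token | text)
# - lm_logits: next-token logits from a text-only language model
import numpy as np

def log_softmax(logits):
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def vlis_step(vlm_logits_cond, vlm_logits_marg, lm_logits, alpha=1.0):
    """Score next tokens by the text-only likelihood, reweighted with
    image-text PMI from the vision-language model."""
    # PMI(token; image | text) = log p_vlm(token | image, text)
    #                          - log p_vlm(token | text)
    pmi = log_softmax(vlm_logits_cond) - log_softmax(vlm_logits_marg)
    # Importance-sampling-style reweighting, done in log space.
    scores = log_softmax(lm_logits) + alpha * pmi
    return int(np.argmax(scores))  # greedy choice, for illustration

# Toy example over a 4-token vocabulary with random logits.
rng = np.random.default_rng(0)
tok = vlis_step(rng.normal(size=4), rng.normal(size=4), rng.normal(size=4))
print("chosen token id:", tok)
```

In this reading, the vision-language model contributes only the visual evidence for each candidate token (the PMI term), while fluency and linguistic knowledge come from the text-only model, which is why no further training is needed.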
- Anthology ID: 2023.emnlp-main.46
- Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 700–721
- URL: https://aclanthology.org/2023.emnlp-main.46
- DOI: 10.18653/v1/2023.emnlp-main.46
- Cite (ACL): Jiwan Chung and Youngjae Yu. 2023. VLIS: Unimodal Language Models Guide Multimodal Language Generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 700–721, Singapore. Association for Computational Linguistics.
- Cite (Informal): VLIS: Unimodal Language Models Guide Multimodal Language Generation (Chung & Yu, EMNLP 2023)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/2023.emnlp-main.46.pdf