Balancing Visual Context Understanding in Dialogue for Image Retrieval

Zhaohui Wei, Lizi Liao, Xiaoyu Du, Xinguang Xiang


Abstract
In the realm of dialogue-to-image retrieval, the primary challenge is to fetch images from a pre-compiled database that accurately reflect the intent embedded within the dialogue history. Existing methods often overemphasize inter-modal alignment, neglecting the nuanced nature of conversational context. Dialogue histories are frequently cluttered with redundant information and often lack direct image descriptions, leading to a substantial disconnect between conversational content and visual representation. This study introduces VCU, a novel framework designed to enhance the comprehension of dialogue history and improve cross-modal matching for image retrieval. VCU leverages large language models (LLMs) to perform a two-step extraction process. It generates precise image-related descriptions from dialogues, while also enhancing visual representation by utilizing object-list texts associated with images. Additionally, auxiliary query collections are constructed to balance the matching process, thereby reducing bias in similarity computations. Experimental results demonstrate that VCU significantly outperforms baseline methods in dialogue-to-image retrieval tasks, highlighting its potential for practical application and effectiveness in bridging the gap between dialogue context and visual content.
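The abstract's core matching idea can be illustrated with a minimal sketch. This is not the authors' implementation: the cosine scoring, the choice of mean auxiliary-query similarity as the bias term, and all function names are assumptions made purely to show how auxiliary query collections could balance similarity computation by subtracting a per-image bias before ranking.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def balanced_scores(query, images, aux_queries):
    """Score each image against a dialogue-derived query, correcting each
    raw similarity by the mean similarity that a collection of auxiliary
    queries assigns to that image (a per-image bias estimate)."""
    scores = []
    for img in images:
        raw = cosine(query, img)
        bias = sum(cosine(aq, img) for aq in aux_queries) / len(aux_queries)
        scores.append(raw - bias)
    return scores
```

Under this reading, an image that scores highly against *any* query (a generic "popular" image) has its advantage discounted, so ranking depends more on alignment with the specific dialogue intent.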
Anthology ID:
2024.findings-emnlp.465
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7929–7942
URL:
https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.465/
DOI:
10.18653/v1/2024.findings-emnlp.465
Cite (ACL):
Zhaohui Wei, Lizi Liao, Xiaoyu Du, and Xinguang Xiang. 2024. Balancing Visual Context Understanding in Dialogue for Image Retrieval. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7929–7942, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Balancing Visual Context Understanding in Dialogue for Image Retrieval (Wei et al., Findings 2024)
PDF:
https://preview.aclanthology.org/icon-24-ingestion/2024.findings-emnlp.465.pdf
Software:
2024.findings-emnlp.465.software.zip