VP-MEL: Visual Prompts Guided Multimodal Entity Linking

Hongze Mi, Jinyuan Li, Zhangxuying Zhangxuying, Haoran Cheng, Jiahao Wang, Di Sun, Gang Pan


Abstract
Multimodal entity linking (MEL), a task aimed at linking mentions within multimodal contexts to their corresponding entities in a knowledge base (KB), has attracted considerable attention in recent years due to its wide range of applications. However, existing MEL methods often rely on mention words as retrieval cues, which limits their ability to effectively utilize information from both images and text. This reliance causes MEL to struggle with accurately retrieving entities in certain scenarios, especially when the focus is on image objects or when mention words are missing from the text. To address these issues, we introduce a Visual Prompts guided Multimodal Entity Linking (VP-MEL) task. Given a text-image pair, VP-MEL aims to link a marked region (i.e., a visual prompt) in the image to its corresponding entity in the knowledge base. To facilitate this task, we present a new dataset, VPWiki, specifically designed for VP-MEL. Furthermore, we propose a framework named IIER, which enhances visual feature extraction using visual prompts and leverages the pre-trained Detective-VLM model to capture latent information. Experimental results on the VPWiki dataset demonstrate that IIER outperforms baseline methods across multiple benchmarks for the VP-MEL task.
Anthology ID:
2025.findings-acl.880
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17122–17137
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.880/
Cite (ACL):
Hongze Mi, Jinyuan Li, Zhangxuying Zhangxuying, Haoran Cheng, Jiahao Wang, Di Sun, and Gang Pan. 2025. VP-MEL: Visual Prompts Guided Multimodal Entity Linking. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17122–17137, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
VP-MEL: Visual Prompts Guided Multimodal Entity Linking (Mi et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.880.pdf