Can images help recognize entities? A study of the role of images for Multimodal NER
Shuguang Chen, Gustavo Aguilar, Leonardo Neves, Thamar Solorio
Abstract
Multimodal named entity recognition (MNER) requires bridging the gap between language understanding and visual context. While many multimodal neural techniques have been proposed to incorporate images into the MNER task, the model’s ability to leverage multimodal interactions remains poorly understood. In this work, we conduct in-depth analyses of existing multimodal fusion techniques from different perspectives and describe the scenarios in which adding information from the image does not boost performance. We also study the use of captions as a way to enrich the context for MNER. Experiments on three datasets from popular social platforms expose the bottleneck of existing multimodal models and the situations where using captions is beneficial.
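The caption-as-context idea can be illustrated with a minimal, hypothetical sketch: generate a caption for the image attached to a post and append it to the post text before running the tagger. The model names (nlpconnect/vit-gpt2-image-captioning, dslim/bert-base-NER) and the simple concatenation scheme below are illustrative assumptions, not the setup used in the paper or in the RiTUAL-UH/multimodal_NER code.

```python
# Hypothetical sketch of caption-based context enrichment for NER.
# Model choices and the concatenation format are illustrative only.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
tagger = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def caption_enriched_ner(text: str, image_path: str):
    # Generate a caption for the image attached to the post.
    caption = captioner(image_path)[0]["generated_text"]
    # Append the caption as additional textual context for the tagger.
    enriched = f"{text}. Image: {caption}"
    # Keep only entity spans whose offsets fall within the original text,
    # discarding entities predicted inside the appended caption.
    return [e for e in tagger(enriched) if e["start"] < len(text)]

if __name__ == "__main__":
    print(caption_enriched_ner("Messi celebrates at Camp Nou", "post_image.jpg"))
```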
- Anthology ID: 2021.wnut-1.11
- Volume: Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
- Month: November
- Year: 2021
- Address: Online
- Venue: WNUT
- Publisher: Association for Computational Linguistics
- Pages: 87–96
- URL: https://aclanthology.org/2021.wnut-1.11
- DOI: 10.18653/v1/2021.wnut-1.11
- Cite (ACL): Shuguang Chen, Gustavo Aguilar, Leonardo Neves, and Thamar Solorio. 2021. Can images help recognize entities? A study of the role of images for Multimodal NER. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 87–96, Online. Association for Computational Linguistics.
- Cite (Informal): Can images help recognize entities? A study of the role of images for Multimodal NER (Chen et al., WNUT 2021)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2021.wnut-1.11.pdf
- Code: RiTUAL-UH/multimodal_NER