Query Generation for Multimodal Documents

Kyungho Kim, Kyungjae Lee, Seung-won Hwang, Young-In Song, Seungwook Lee


Abstract
This paper studies the problem of generating likely queries for multimodal documents with images. Our application scenario is enabling efficient "first-stage retrieval" of relevant documents, by attaching generated queries to documents before indexing. We can then index this expanded text to efficiently narrow down to candidate matches using an inverted index, so that expensive reranking can follow. Our evaluation results show that our proposed multimodal representation meaningfully improves relevance ranking. More importantly, our framework can achieve the state of the art in the first-stage retrieval scenarios.
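The abstract describes a document-expansion pipeline: generated queries are appended to each document before indexing, and a cheap inverted-index lookup narrows candidates for a later, more expensive reranker. Below is a minimal, hypothetical sketch of that idea; the `generate_queries` stub only stands in for the paper's multimodal query generator, and all names and toy data are illustrative, not the authors' implementation.

```python
# Sketch of first-stage retrieval via document expansion.
# generate_queries() is a placeholder for a learned (multimodal)
# query-generation model; the data below is purely illustrative.
from collections import defaultdict


def generate_queries(doc_text: str) -> list[str]:
    # Placeholder: a real system would condition on the document's
    # text and images to produce likely user queries.
    return ["example query about " + doc_text.split()[0].lower()]


def expand(doc_text: str) -> str:
    # Attach generated queries to the document text before indexing.
    return doc_text + " " + " ".join(generate_queries(doc_text))


def build_inverted_index(docs: dict[str, str]) -> dict[str, set[str]]:
    # Index the expanded text so generated queries become searchable terms.
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():
        for term in expand(text).lower().split():
            index[term].add(doc_id)
    return index


def first_stage_candidates(index: dict[str, set[str]], query: str) -> set[str]:
    # Cheap candidate narrowing; an expensive reranker would follow.
    candidates: set[str] = set()
    for term in query.lower().split():
        candidates |= index.get(term, set())
    return candidates


if __name__ == "__main__":
    docs = {
        "d1": "Chart of quarterly sales figures",
        "d2": "Photo of a mountain trail",
    }
    idx = build_inverted_index(docs)
    print(first_stage_candidates(idx, "quarterly sales"))  # -> {'d1'}
```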
Anthology ID:
2021.eacl-main.54
Volume:
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Month:
April
Year:
2021
Address:
Online
Editors:
Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
659–668
URL:
https://aclanthology.org/2021.eacl-main.54
DOI:
10.18653/v1/2021.eacl-main.54
Cite (ACL):
Kyungho Kim, Kyungjae Lee, Seung-won Hwang, Young-In Song, and Seungwook Lee. 2021. Query Generation for Multimodal Documents. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 659–668, Online. Association for Computational Linguistics.
Cite (Informal):
Query Generation for Multimodal Documents (Kim et al., EACL 2021)
PDF:
https://preview.aclanthology.org/landing_page/2021.eacl-main.54.pdf