Pre-training Cross-Modal Retrieval by Expansive Lexicon-Patch Alignment
Yang Yiyuan, Guodong Long, Michael Blumenstein, Xiubo Geng, Chongyang Tao, Tao Shen, Daxin Jiang
Abstract
Recent large-scale vision-language pre-training relies on global image-text alignment learned by contrastive learning, further boosted by fine-grained alignment in a weakly contrastive manner, for cross-modal retrieval. Nonetheless, besides the semantic matching learned by contrastive learning, cross-modal retrieval also relies heavily on object matching between modalities. This calls for fine-grained categorical discriminative learning, which, however, suffers from scarce data in fully-supervised scenarios and from information asymmetry in weakly-supervised scenarios when applied to cross-modal retrieval. To address these issues, we propose expansive lexicon-patch alignment (ELA), which aligns image patches with an entire vocabulary rather than only the words explicitly present in the paired text, yielding annotation-free alignment and information augmentation and thus enabling more effective fine-grained categorical discriminative learning for cross-modal retrieval. Experimental results show that ELA effectively learns representative fine-grained information and outperforms state-of-the-art methods on cross-modal retrieval.
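The abstract's core idea, scoring image patches against an entire vocabulary instead of only the caption's words, can be viewed as a multi-label alignment objective. The snippet below is a minimal sketch under that reading: the patch encoder, max-pooling choice, function names, shapes, and the label-expansion step are all illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of lexicon-patch alignment (hypothetical, not the
# authors' code). Assumes a ViT-style patch encoder has already produced
# patch embeddings, and that each image carries multi-hot vocabulary
# labels (caption words plus expanded lexicon entries).
import torch
import torch.nn.functional as F

def lexicon_patch_alignment_loss(patch_emb, lexicon_emb, target_words):
    """
    patch_emb:    (B, P, D)  patch embeddings from the image encoder
    lexicon_emb:  (V, D)     embeddings for every word in the vocabulary
    target_words: (B, V)     multi-hot labels: 1 for words deemed relevant
                             to the image, 0 otherwise
    """
    # Score every patch against every vocabulary word.
    logits = patch_emb @ lexicon_emb.t()          # (B, P, V)
    # Max-pool over patches: a word counts as "present" in the image
    # if any single patch matches it strongly.
    image_word_logits, _ = logits.max(dim=1)      # (B, V)
    # Multi-label alignment over the whole vocabulary, not just the
    # words that happen to appear in the paired caption.
    return F.binary_cross_entropy_with_logits(
        image_word_logits, target_words.float())

# Toy usage with random tensors: 2 images, 16 patches, 256-d embeddings,
# a 1000-word vocabulary, and sparse random word labels.
B, P, D, V = 2, 16, 256, 1000
patch_emb = torch.randn(B, P, D)
lexicon_emb = torch.randn(V, D)
target_words = (torch.rand(B, V) < 0.01).float()
print(lexicon_patch_alignment_loss(patch_emb, lexicon_emb, target_words).item())
```

Because the targets range over the full vocabulary, no patch-level annotation is needed; which words beyond the caption get positive labels is exactly the expansion step the paper's title refers to, and is left abstract here.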
- Anthology ID:
- 2024.lrec-main.1136
- Volume:
- Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
- Month:
- May
- Year:
- 2024
- Address:
- Torino, Italia
- Editors:
- Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
- Venues:
- LREC | COLING
- Publisher:
- ELRA and ICCL
- Pages:
- 12977–12987
- URL:
- https://aclanthology.org/2024.lrec-main.1136
- Cite (ACL):
- Yang Yiyuan, Guodong Long, Michael Blumenstein, Xiubo Geng, Chongyang Tao, Tao Shen, and Daxin Jiang. 2024. Pre-training Cross-Modal Retrieval by Expansive Lexicon-Patch Alignment. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12977–12987, Torino, Italia. ELRA and ICCL.
- Cite (Informal):
- Pre-training Cross-Modal Retrieval by Expansive Lexicon-Patch Alignment (Yiyuan et al., LREC-COLING 2024)
- PDF:
- https://aclanthology.org/2024.lrec-main.1136.pdf