Zirui Li
2025
DocMMIR: A Framework for Document Multi-modal Information Retrieval
Zirui Li | Siwei Wu | Yizhi Li | Xingyu Wang | Yi Zhou | Chenghua Lin
Findings of the Association for Computational Linguistics: EMNLP 2025
The rapid advancement of unsupervised representation learning and large-scale pre-trained vision-language models has significantly improved cross-modal retrieval tasks. However, existing multi-modal information retrieval (MMIR) studies lack a comprehensive exploration of document-level retrieval and suffer from the absence of cross-domain datasets at this granularity. To address this limitation, we introduce DocMMIR, a novel multi-modal document retrieval framework designed explicitly to unify diverse document formats and domains—including Wikipedia articles, scientific papers (arXiv), and presentation slides—within a comprehensive retrieval scenario. We construct a large-scale cross-domain multimodal dataset, comprising 450K training, 19.2K validation, and 19.2K test documents, serving as both a benchmark to reveal the shortcomings of existing MMIR models and a training set for further improvement. The dataset systematically integrates textual and visual information. Our comprehensive experimental analysis reveals substantial limitations in current state-of-the-art MLLMs (CLIP, BLIP2, SigLIP-2, ALIGN) when applied to our tasks, with only CLIP (ViT-L/14) demonstrating reasonable zero-shot performance. Through systematic investigation of cross-modal fusion strategies and loss function selection on the CLIP (ViT-L/14) model, we develop an optimised approach that achieves a +31% improvement in MRR@10 over the zero-shot baseline after fine-tuning. Our findings offer crucial insights and practical guidance for future development in unified multimodal document retrieval tasks.
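The headline number in this abstract is MRR@10, the mean reciprocal rank of the correct document when it appears within the top ten retrieved results of a CLIP-style dual encoder. The following is a minimal sketch of how such a metric is computed; the embedding shapes, the diagonal ground-truth layout (query i matches document i), and the random toy data are illustrative assumptions, not the paper's actual evaluation pipeline.

```python
# Sketch: scoring queries against documents with dual-encoder embeddings
# and computing MRR@10. Assumes query_embs[i] matches doc_embs[i].
import numpy as np

def mrr_at_10(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    # Cosine similarity via L2-normalised dot products.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    scores = q @ d.T  # shape: (num_queries, num_docs)

    reciprocal_ranks = []
    for i, row in enumerate(scores):
        ranking = np.argsort(-row)                       # docs sorted by similarity
        rank = int(np.where(ranking == i)[0][0]) + 1     # 1-based rank of the true doc
        reciprocal_ranks.append(1.0 / rank if rank <= 10 else 0.0)
    return float(np.mean(reciprocal_ranks))

# Toy usage with random vectors standing in for CLIP outputs.
rng = np.random.default_rng(0)
queries = rng.standard_normal((32, 768))
docs = queries + 0.1 * rng.standard_normal((32, 768))   # noisy "matching" documents
print(f"MRR@10: {mrr_at_10(queries, docs):.3f}")
```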
2019
Choosing Transfer Languages for Cross-Lingual Learning
Yu-Hsiang Lin | Chian-Yu Chen | Jean Lee | Zirui Li | Yuyan Zhang | Mengzhou Xia | Shruti Rijhwani | Junxian He | Zhisong Zhang | Xuezhe Ma | Antonios Anastasopoulos | Patrick Littell | Graham Neubig
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Cross-lingual transfer, where a high-resource transfer language is used to improve the accuracy of a low-resource task language, is now an invaluable tool for improving the performance of natural language processing (NLP) on low-resource languages. However, given a particular task language, it is not clear which language to transfer from, and the standard strategy is to select languages based on ad hoc criteria, usually the intuition of the experimenter. Since a large number of features contribute to the success of cross-lingual transfer (including phylogenetic similarity, typological properties, lexical overlap, and size of available data), even the most enlightened experimenter rarely considers all these factors for the particular task at hand. In this paper, we consider the task of automatically selecting optimal transfer languages as a ranking problem, and build models that consider the aforementioned features to perform this prediction. In experiments on representative NLP tasks, we demonstrate that our model predicts good transfer languages much better than ad hoc baselines considering single features in isolation, and we glean insights into which features are most informative for each NLP task, which may inform future ad hoc selection even without the use of our method.
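The abstract casts transfer-language selection as learning-to-rank over candidate languages described by features such as phylogenetic similarity, typological properties, lexical overlap, and data size. The sketch below shows one way such a ranker could be set up; the gradient-boosted LambdaRank model, the feature names, and the synthetic relevance labels are assumptions for illustration, not the authors' exact model or feature extraction.

```python
# Sketch: transfer-language selection framed as learning-to-rank.
# Feature columns, toy data, and the LightGBM ranker are illustrative assumptions.
import numpy as np
from lightgbm import LGBMRanker

rng = np.random.default_rng(0)
num_task_langs, num_candidates = 8, 20

# Assumed features per candidate transfer language:
# [phylogenetic_sim, typological_sim, lexical_overlap, data_size_ratio]
X = rng.random((num_task_langs * num_candidates, 4))

# Synthetic relevance grades (0-3): how well transfer from this candidate worked.
relevance = (4 * (0.6 * X[:, 0] + 0.4 * X[:, 2])).astype(int)

# One query group per task language: the ranker only compares candidates
# that share the same task language.
groups = [num_candidates] * num_task_langs

ranker = LGBMRanker(objective="lambdarank", n_estimators=100, min_child_samples=5)
ranker.fit(X, relevance, group=groups)

# Score and rank candidate transfer languages for a new task language.
candidates = rng.random((num_candidates, 4))
order = np.argsort(-ranker.predict(candidates))
print("Top-3 candidate indices:", order[:3].tolist())
```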