RAMQA: A Unified Framework for Retrieval-Augmented Multi-Modal Question Answering

Yang Bai, Christan Grant, Daisy Zhe Wang


Abstract
Multi-modal Retrieval-Augmented Question Answering (MRAQA), integrating text and images, has gained significant attention in information retrieval (IR) and natural language processing (NLP). Traditional ranking methods rely on small encoder-based language models, which are incompatible with modern decoder-based generative large language models (LLMs) that have advanced various NLP tasks. To bridge this gap, we propose RAMQA, a unified framework combining learning-to-rank methods with generative permutation-enhanced ranking techniques. We first train a pointwise multi-modal ranker using LLaVA as the backbone. Then, we apply instruction tuning to train a LLaMA model for re-ranking the top-k documents using an innovative autoregressive multi-task learning approach. Our generative ranking model generates re-ranked document IDs and specific answers from document candidates in various permutations. Experiments on two MRAQA benchmarks, WebQA and MultiModalQA, show significant improvements over strong baselines, highlighting the effectiveness of our approach. Data and code will be made public once the paper is accepted.
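The abstract outlines a two-stage pipeline: a pointwise LLaVA-based ranker retrieves top-k candidates, and an instruction-tuned LLaMA model then re-ranks them generatively, consuming the candidate list in multiple permutations and emitting ranked document IDs plus an answer. The sketch below illustrates only the permutation-enhanced input construction; the prompt wording, function names, and candidate format are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of permutation-enhanced prompt construction,
# assuming a plain-text interface to a causal LM. Names and prompt
# wording are illustrative, not taken from the RAMQA codebase.
import random


def build_rerank_prompt(question: str, candidates: list[tuple[str, str]]) -> str:
    """Format a question and (doc_id, text) candidates into one prompt that
    asks the model to emit re-ranked document IDs followed by an answer."""
    lines = [f"Question: {question}", "Candidates:"]
    for doc_id, text in candidates:
        lines.append(f"[{doc_id}] {text}")
    lines.append("List the relevant document IDs in ranked order, then give the answer.")
    return "\n".join(lines)


def permuted_prompts(question: str,
                     candidates: list[tuple[str, str]],
                     n_perms: int = 3,
                     seed: int = 0):
    """Yield the same candidate set in several random orders, so the model's
    ranking is less sensitive to where each document appears in the input
    (the 'various permutations' mentioned in the abstract)."""
    rng = random.Random(seed)
    for _ in range(n_perms):
        perm = list(candidates)
        rng.shuffle(perm)
        yield build_rerank_prompt(question, perm)


# Example: three differently ordered prompts over dummy top-k candidates
# as they might come from a first-stage pointwise ranker.
top_k = [("D1", "A photo caption about the Eiffel Tower."),
         ("D2", "A paragraph about Parisian landmarks."),
         ("D3", "An unrelated sports article.")]
for prompt in permuted_prompts("Where is the Eiffel Tower?", top_k):
    print(prompt, end="\n\n")
```

At inference, the outputs produced for the different permutations can then be aggregated, for example by averaging each document's emitted rank, so the final ordering does not depend on the arbitrary order in which candidates happened to be listed.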
Anthology ID: 2025.findings-naacl.60
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1061–1076
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.60/
Cite (ACL): Yang Bai, Christan Grant, and Daisy Zhe Wang. 2025. RAMQA: A Unified Framework for Retrieval-Augmented Multi-Modal Question Answering. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 1061–1076, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): RAMQA: A Unified Framework for Retrieval-Augmented Multi-Modal Question Answering (Bai et al., Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.60.pdf