BiMediX2: Bio-Medical EXpert LMM for Diverse Medical Modalities
Sahal Shaji Mullappilly | Mohammed Irfan Kurpath | Sara Pieri | Saeed Yahya Alseiari | Shanavas Cholakkal | Khaled M Aldahmani | Fahad Shahbaz Khan | Rao Muhammad Anwer | Salman Khan | Timothy Baldwin | Hisham Cholakkal
Findings of the Association for Computational Linguistics: EMNLP 2025
We introduce BiMediX2, a bilingual (Arabic-English) Bio-Medical EXpert Large Multimodal Model that supports text-based and image-based medical interactions. It enables multi-turn conversations in Arabic and English and supports diverse medical imaging modalities, including radiology, CT, and histology. To train BiMediX2, we curate BiMed-V, an extensive Arabic-English bilingual healthcare dataset consisting of 1.6M samples of diverse medical interactions. This dataset supports a range of medical Large Language Model (LLM) and Large Multimodal Model (LMM) tasks, including multi-turn medical conversations, report generation, and visual question answering (VQA). We also introduce BiMed-MBench, the first Arabic-English medical LMM evaluation benchmark, verified by medical experts. BiMediX2 demonstrates excellent performance across multiple medical LLM and LMM benchmarks, achieving state-of-the-art results compared to other open-source models. On BiMed-MBench, BiMediX2 outperforms existing methods by over 9% in English and more than 20% in Arabic evaluations. Additionally, it surpasses GPT-4 by approximately 9% in UPHILL factual accuracy evaluations and excels in various medical VQA, report generation, and report summarization tasks. Our trained models, instruction set, and source code are available at https://github.com/mbzuai-oryx/BiMediX2.
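As a rough illustration of how one might query such a bilingual medical LMM for visual question answering, the sketch below assumes the released checkpoints follow a standard LLaVA-style layout loadable with Hugging Face transformers; the model path, processor class, and prompt template are assumptions made for this example only, and the repository linked above documents the actual checkpoints and usage.

```python
# Hypothetical inference sketch for a LLaVA-style bilingual medical LMM.
# The model path and prompt template below are placeholders, not the
# official BiMediX2 release; see https://github.com/mbzuai-oryx/BiMediX2
# for the real checkpoints and instructions.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_PATH = "path/to/bimedix2-checkpoint"  # placeholder path (assumption)

processor = AutoProcessor.from_pretrained(MODEL_PATH)
model = LlavaForConditionalGeneration.from_pretrained(
    MODEL_PATH, torch_dtype=torch.float16, device_map="auto"
)

# Visual question answering on a medical image, asked in English or Arabic.
image = Image.open("chest_xray.png")
prompt = "USER: <image>\nIs there evidence of pleural effusion? ASSISTANT:"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```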