Juan Zambrano
2022
ViLMedic: a framework for research at the intersection of vision and language in medical AI
Jean-Benoit Delbrouck | Khaled Saab | Maya Varma | Sabri Eyuboglu | Pierre Chambon | Jared Dunnmon | Juan Zambrano | Akshay Chaudhari | Curtis Langlotz
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
There is a growing need to model interactions between data modalities (e.g., vision, language), both to improve AI predictions on existing tasks and to enable new applications. In the recent field of multimodal medical AI, integrating multiple modalities has gained widespread popularity as multimodal models have been shown to improve performance and robustness, require fewer training samples, and add complementary information. To improve technical reproducibility and transparency for multimodal medical tasks, and to speed up progress across medical AI, we present ViLMedic, a Vision-and-Language medical library. As of 2022, the library contains a dozen reference implementations replicating state-of-the-art results for problems ranging from medical visual question answering and radiology report generation to multimodal representation learning on widely adopted medical datasets. In addition, ViLMedic hosts a model-zoo with more than twenty pretrained models for the above tasks, designed to be extensible by researchers and simple for practitioners. Ultimately, we hope our reproducible pipelines can enable clinical translation and create real impact. The library is available at https://github.com/jbdel/vilmedic.
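For readers unfamiliar with the kind of vision-and-language pipeline the abstract refers to (e.g., radiology report generation), the following is a minimal, self-contained PyTorch sketch of an image encoder feeding a text decoder. It is not ViLMedic code and does not reflect the library's API; the class name, backbone choice, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch (not ViLMedic code): a generic vision-and-language pipeline for
# radiology report generation, i.e. an image encoder whose features condition a
# text decoder. All names and sizes below are assumptions made for illustration.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ImageToReportModel(nn.Module):
    def __init__(self, vocab_size: int = 10_000, d_model: int = 512):
        super().__init__()
        # Vision backbone: reuse a ResNet trunk and drop its classification head.
        backbone = resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, 7, 7)
        self.proj = nn.Linear(512, d_model)

        # Language side: embed report tokens and decode with cross-attention over
        # the projected image features (positional encodings omitted for brevity).
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(images)              # (B, 512, H', W')
        feats = feats.flatten(2).transpose(1, 2)  # (B, H'*W', 512)
        memory = self.proj(feats)                 # (B, H'*W', d_model)
        # Causal mask so each report token only attends to earlier tokens.
        t = tokens.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        hidden = self.decoder(self.embed(tokens), memory, tgt_mask=causal)
        return self.lm_head(hidden)               # (B, T, vocab_size)


if __name__ == "__main__":
    model = ImageToReportModel()
    dummy_xray = torch.randn(2, 3, 224, 224)           # batch of chest X-ray images
    dummy_report = torch.randint(0, 10_000, (2, 32))   # tokenized reports
    logits = model(dummy_xray, dummy_report)
    print(logits.shape)                                # torch.Size([2, 32, 10000])
```

The sketch only illustrates the structure of such a model; the actual reference implementations and pretrained checkpoints for these tasks are the ones distributed in the ViLMedic repository linked above.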