2024
MAIRA at RRG24: A specialised large multimodal model for radiology report generation
Shaury Srivastav | Mercy Ranjit | Fernando Pérez-García | Kenza Bouzid | Shruthi Bannur | Daniel C. Castro | Anton Schwaighofer | Harshita Sharma | Maximilian Ilse | Valentina Salvatelli | Sam Bond-Taylor | Fabian Falck | Anja Thieme | Hannah Richardson | Matthew P. Lungren | Stephanie L. Hyland | Javier Alvarez-Valle
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
This paper discusses the participation of the MSR MAIRA team in the Large-Scale Radiology Report Generation Shared Task Challenge, part of the BioNLP workshop at ACL 2024. We present a radiology-specific multimodal model designed to generate radiological reports from chest X-rays (CXRs). Our proposed model combines RAD-DINO, a CXR-specific image encoder, with a Large Language Model (LLM) based on Vicuna-7B, via a multi-layer perceptron (MLP) adapter. Both the adapter and the LLM have been fine-tuned in a single-stage training setup to generate radiology reports. Experimental results indicate that jointly training on the findings and impression sections improves findings prediction. Additionally, incorporating lateral images alongside frontal images, when available, further improves all metrics. More information and resources about MAIRA can be found on the project website: http://aka.ms/maira.
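The abstract describes bridging a vision encoder and an LLM with an MLP adapter. The sketch below illustrates that general pattern: patch features from an image encoder are projected into the LLM's embedding space so the language model can attend over them. All dimensions, weights, and the two-layer design here are hypothetical stand-ins, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: vision-encoder feature size, adapter hidden
# size, and LLM embedding size. None of these come from the paper.
VISION_DIM, HIDDEN_DIM, LLM_DIM = 768, 1024, 4096

# Randomly initialised adapter weights (stand-ins for trained parameters).
W1 = rng.standard_normal((VISION_DIM, HIDDEN_DIM)) * 0.02
W2 = rng.standard_normal((HIDDEN_DIM, LLM_DIM)) * 0.02

def mlp_adapter(patch_features: np.ndarray) -> np.ndarray:
    """Map (num_patches, VISION_DIM) features to (num_patches, LLM_DIM)."""
    hidden = np.maximum(patch_features @ W1, 0.0)  # ReLU non-linearity
    return hidden @ W2

# Stand-in for the image encoder's per-patch output for one CXR.
patches = rng.standard_normal((196, VISION_DIM))
llm_inputs = mlp_adapter(patches)
print(llm_inputs.shape)  # (196, 4096)
```

The projected tokens would then be prepended to the text-token embeddings before being fed to the LLM; that concatenation step is omitted here.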
2023
Compositional Zero-Shot Domain Transfer with Text-to-Text Models
Fangyu Liu | Qianchu Liu | Shruthi Bannur | Fernando Pérez-García | Naoto Usuyama | Sheng Zhang | Tristan Naumann | Aditya Nori | Hoifung Poon | Javier Alvarez-Valle | Ozan Oktay | Stephanie L. Hyland
Transactions of the Association for Computational Linguistics, Volume 11
Label scarcity is a bottleneck for improving task performance in specialized domains. We propose a novel compositional transfer learning framework (DoT5) for zero-shot domain transfer. Without access to in-domain labels, DoT5 jointly learns domain knowledge (from masked language modelling of unlabelled in-domain free text) and task knowledge (from task training on more readily available general-domain data) in a multi-task manner. To improve the transferability of task training, we design a strategy named NLGU: we simultaneously train natural language generation (NLG) for in-domain label-to-data generation, which enables data augmentation for self-finetuning, and natural language understanding (NLU) for label prediction. We evaluate DoT5 on the biomedical domain and the resource-lean subdomain of radiology, focusing on natural language inference, text summarization, and embedding learning. DoT5 demonstrates the effectiveness of compositional transfer learning through multi-task learning. In particular, DoT5 outperforms the current state-of-the-art in zero-shot transfer by over 7 absolute points in accuracy on RadNLI. We validate DoT5 with ablations and a case study demonstrating its ability to solve challenging NLI examples requiring in-domain expertise.
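The NLGU strategy trains one text-to-text model on two views of the same labelled example: an NLG pair (label-to-data generation) and an NLU pair (label prediction). The sketch below shows how such dual training pairs might be rendered for an NLI example; the prompt wording and field names are illustrative assumptions, not the paper's actual templates.

```python
def nlgu_pairs(premise: str, hypothesis: str, label: str):
    """Render one labelled NLI example as (input, target) pairs for
    both the NLG and NLU directions. Prompt phrasing is hypothetical."""
    # NLG direction: condition on premise + label, generate the hypothesis.
    nlg = (
        f"generate hypothesis: premise: {premise} label: {label}",
        hypothesis,
    )
    # NLU direction: condition on premise + hypothesis, predict the label.
    nlu = (
        f"predict label: premise: {premise} hypothesis: {hypothesis}",
        label,
    )
    return nlg, nlu

nlg, nlu = nlgu_pairs(
    premise="No focal consolidation is seen.",
    hypothesis="The lungs are clear of consolidation.",
    label="entailment",
)
print(nlg[1])  # The lungs are clear of consolidation.
print(nlu[1])  # entailment
```

In the abstract's framing, the NLG direction makes data augmentation possible (generating in-domain examples for self-finetuning), while the NLU direction is what is ultimately evaluated for zero-shot label prediction.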