Zero-Shot Information Extraction for Clinical Meta-Analysis using Large Language Models

David Kartchner, Selvi Ramalingam, Irfan Al-Hussaini, Olivia Kronick, Cassie Mitchell


Abstract
Meta-analysis of randomized clinical trials (RCTs) plays a crucial role in evidence-based medicine but can be labor-intensive and error-prone. This study explores the use of large language models to enhance the efficiency of aggregating results from RCTs at scale. We perform a detailed comparison of the performance of these models in zero-shot prompt-based information extraction from a diverse set of RCTs against traditional manual annotation methods. We analyze the results for two different meta-analyses aimed at drug repurposing in cancer therapy and pharmacovigilance in chronic myeloid leukemia. Our findings reveal that the best model for the two demonstrated tasks, ChatGPT, can generally extract correct information and identify when the desired information is missing from an article. We additionally conduct a systematic error analysis, documenting the prevalence of diverse error types encountered during prompt-based information extraction.
Anthology ID:
2023.bionlp-1.37
Volume:
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Dina Demner-Fushman, Sophia Ananiadou, Kevin Cohen
Venue:
BioNLP
Publisher:
Association for Computational Linguistics
Pages:
396–405
URL:
https://aclanthology.org/2023.bionlp-1.37
DOI:
10.18653/v1/2023.bionlp-1.37
Cite (ACL):
David Kartchner, Selvi Ramalingam, Irfan Al-Hussaini, Olivia Kronick, and Cassie Mitchell. 2023. Zero-Shot Information Extraction for Clinical Meta-Analysis using Large Language Models. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 396–405, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Zero-Shot Information Extraction for Clinical Meta-Analysis using Large Language Models (Kartchner et al., BioNLP 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2023.bionlp-1.37.pdf