Enhancing Clinical Trial Analysis through Large Language Models for Multi-Evidence Natural Language Inference

Shobanapriyan Chandrasegaran, Amal Htait


Abstract
The exponential growth of clinical trial reports (CTRs) presents a critical challenge for evidence-based medicine, with manual systematic reviews requiring months to synthesise findings. This paper evaluates Large Language Models (LLMs) and retrieval methods for automated Natural Language Inference (NLI) and evidence extraction from CTRs, and seeks to improve upon previously reported results in this domain. Using the NLI4CT dataset of 2,400 annotated statement-evidence pairs from breast cancer trials, we conducted a comparative evaluation of general-purpose LLMs, domain-specific LLMs, and transformer-based baselines on entailment classification and evidence retrieval tasks. Reasoning-capable general-purpose LLMs (such as Qwen-32B) achieved the best performance on entailment classification, exceeding both the other models evaluated in this study and previously reported state-of-the-art results. Although domain-specific adaptations yielded improvements at comparable scale, larger general-purpose models maintained superior absolute performance. For evidence retrieval, LLM-based embedding models (such as bge-large-en-v1.5) surpassed classical transformer-based ranking approaches. These findings demonstrate that modern LLMs with reasoning capabilities can effectively support real-time clinical evidence synthesis without task-specific fine-tuning, offering a path toward scalable automated systems for clinical trial interpretation that could substantially reduce the evidence-to-practice gap in medical decision-making.
Anthology ID:
2026.lrec-main.36
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resources Association
Pages:
515–523
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.36/
Cite (ACL):
Shobanapriyan Chandrasegaran and Amal Htait. 2026. Enhancing Clinical Trial Analysis through Large Language Models for Multi-Evidence Natural Language Inference. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 515–523, Palma de Mallorca, Spain.
Cite (Informal):
Enhancing Clinical Trial Analysis through Large Language Models for Multi-Evidence Natural Language Inference (Chandrasegaran & Htait, LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.36.pdf