Pablo Romero


2025

Medication Extraction and Entity Linking using Stacked and Voted Ensembles on LLMs
Pablo Romero | Lifeng Han | Goran Nenadic
Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health)

The Manchester Bees at PerAnsSumm 2025: Iterative Self-Prompting with Claude and o1 for Perspective-aware Healthcare Answer Summarisation
Pablo Romero | Libo Ren | Lifeng Han | Goran Nenadic
Proceedings of the Second Workshop on Patient-Oriented Language Processing (CL4Health)

INSIGHTBUDDY-AI: Medication Extraction and Entity Linking using Pre-Trained Language Models and Ensemble Learning
Pablo Romero | Lifeng Han | Goran Nenadic
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)

This paper presents our system, InsightBuddy-AI, designed for extracting medication mentions and their associated attributes, and for linking these entities to established clinical terminology resources, including SNOMED-CT, the British National Formulary (BNF), ICD, and the Dictionary of Medicines and Devices (dm+d). To perform medication extraction, we investigated various ensemble learning approaches, including stacked and voting ensembles (using first, average, and max voting methods) built upon eight pre-trained language models (PLMs). These models include the general-domain PLMs BERT, RoBERTa, and RoBERTa-Large, as well as domain-specific models such as BioBERT, BioClinicalBERT, BioMedRoBERTa, ClinicalBERT, and PubMedBERT. The system targets the extraction of drug-related attributes such as adverse drug effects (ADEs), dosage, duration, form, frequency, reason, route, and strength. Experiments conducted on the n2c2-2018 shared task dataset demonstrate that ensemble learning methods outperformed individually fine-tuned models, with notable improvements of 2.43% in Precision and 1.35% in F1-score. We have also developed cross-platform desktop applications for both entity recognition and entity linking, available for Windows and macOS. The InsightBuddy-AI application is freely accessible for research use at https://github.com/HECTA-UoM/InsightBuddy-AI.
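The abstract describes voting ensembles with first, average, and max voting over per-model predictions. A minimal sketch of how such token-level voting could combine label probabilities from several fine-tuned PLMs (the precise semantics of "first" voting, and the function and array names, are assumptions here, not the paper's implementation):

```python
import numpy as np

def vote(prob_stacks: np.ndarray, method: str = "average") -> np.ndarray:
    """Combine per-model token-label probabilities into one label sequence.

    prob_stacks: array of shape (n_models, n_tokens, n_labels),
    holding each model's softmax probabilities per token.
    Returns an (n_tokens,) array of predicted label indices.
    """
    if method == "average":
        # Average probabilities across models, then take the best label.
        return np.mean(prob_stacks, axis=0).argmax(axis=-1)
    if method == "max":
        # Keep each label's highest score across models, then argmax.
        return np.max(prob_stacks, axis=0).argmax(axis=-1)
    if method == "first":
        # Assumed semantics: defer to the first model's prediction.
        return prob_stacks[0].argmax(axis=-1)
    raise ValueError(f"unknown voting method: {method}")
```

A stacked ensemble, by contrast, would feed these per-model outputs into a second-level classifier rather than a fixed voting rule.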

Efficient On-Device Text Simplification for Firefox with Synthetic Data Fine-Tuning
Pablo Romero | Zihao Li | Matthew Shardlow
Proceedings of the Fourth Workshop on Text Simplification, Accessibility and Readability (TSAR 2025)

This work presents a system for on-device text simplification that enables users to process sensitive text without relying on cloud-based services. Through the use of quantization techniques and a novel approach to controllable text simplification, we reduce model size by up to 75 percent with minimal performance degradation. Our models achieve efficient state-of-the-art results using a synthetic dataset of 2,909 examples, outperforming prior work trained on 300K examples. This efficiency stems from (1) a single control token strategy that precisely targets specific reading levels, (2) a contrastive training approach that enriches model understanding through exposure to multiple simplification levels, and (3) individual models that dedicate full parameter capacity to specific reading-level transformations. Our best models achieve up to 82.18 BLEU at the Advanced level and 46.12 SARI at the Elementary level on standard benchmarks, with performance preserved even after aggressive quantization. This work is implemented as a collaboration with the Mozilla AI team to process text entirely locally, ensuring sensitive information never leaves the user's device. A demonstration video is available at https://youtu.be/TzmaxnARMzg and a web demo at https://pablorom2004.github.io/Simplification-Web-Demo
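The single control token strategy mentioned above can be illustrated with a small sketch: a level-specific token is prepended to the input so the model conditions its output on the target reading level. The token strings and function name below are hypothetical, chosen only to show the pattern, not the paper's actual vocabulary:

```python
# Hypothetical control tokens; the paper's actual tokens may differ.
LEVEL_TOKENS = {
    "advanced": "<ADV>",
    "intermediate": "<INT>",
    "elementary": "<ELE>",
}

def build_input(text: str, level: str) -> str:
    """Prepend a reading-level control token to the source text."""
    if level not in LEVEL_TOKENS:
        raise ValueError(f"unknown reading level: {level}")
    return f"{LEVEL_TOKENS[level]} {text}"
```

During fine-tuning, each training pair would carry the token matching its target level, so at inference time swapping the token steers the same model toward a different simplification level.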