Avani Gupta


2025

Building Trust in Clinical LLMs: Bias Analysis and Dataset Transparency
Svetlana Maslenkova | Clement Christophe | Marco AF Pimentel | Tathagata Raha | Muhammad Umar Salman | Ahmed Al Mahrooqi | Avani Gupta | Shadab Khan | Ronnie Rajan | Praveenkumar Kanithi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Large language models offer transformative potential for healthcare, yet their responsible and equitable development depends critically on a deeper understanding of how training data characteristics influence model behavior, including the potential for bias. Current practices in dataset curation and bias assessment often lack the necessary transparency, creating an urgent need for comprehensive evaluation frameworks to foster trust and guide improvements. In this study, we present an in-depth analysis of potential downstream biases in clinical language models, with a focus on differential opioid prescription tendencies across diverse demographic groups, such as ethnicity, gender, and age. As part of this investigation, we introduce HC4: Healthcare Comprehensive Commons Corpus, a novel and extensively curated pretraining dataset exceeding 89 billion tokens. Our evaluation leverages both established general benchmarks and a novel, healthcare-specific methodology, offering crucial insights to support fairness and safety in clinical AI applications.

2022

CitRet: A Hybrid Model for Cited Text Span Retrieval
Amit Pandey | Avani Gupta | Vikram Pudi
Proceedings of the 29th International Conference on Computational Linguistics

The paper aims to identify cited text spans in the reference paper related to the given citance in the citing paper. We refer to it as cited text span retrieval (CTSR). Most current methods attempt this task by relying on pre-trained off-the-shelf deep learning models like SciBERT. Though these models are pre-trained on large datasets, they underperform in out-of-domain settings. We introduce CitRet, a novel hybrid model for CTSR that leverages unique semantic and syntactic structural characteristics of scientific documents. This enables us to fine-tune on significantly less data: only 1040 documents. Our model augments mildly-trained SBERT-based contextual embeddings with pre-trained non-contextual Word2Vec embeddings to calculate semantic textual similarity. We demonstrate the performance of our model on the CLSciSumm shared tasks, improving state-of-the-art results by over 15% in F1 score.
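The hybrid idea in the abstract — combining a contextual (SBERT-style) similarity with a non-contextual (Word2Vec-style) similarity — can be sketched as below. This is a minimal illustration, not the paper's actual model: the embedding vectors are random stand-ins for real SBERT and Word2Vec outputs, and the blending weight `alpha` is a hypothetical parameter.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_similarity(ctx_a, ctx_b, w2v_a, w2v_b, alpha=0.5):
    """Blend a contextual similarity (e.g. from SBERT sentence embeddings)
    with a non-contextual one (e.g. from averaged Word2Vec embeddings).
    alpha is a hypothetical mixing weight, not taken from the paper."""
    return alpha * cosine(ctx_a, ctx_b) + (1 - alpha) * cosine(w2v_a, w2v_b)

# Toy vectors standing in for real sentence/word embeddings
# (384 and 300 are typical SBERT and Word2Vec dimensionalities).
rng = np.random.default_rng(0)
ctx_a, ctx_b = rng.normal(size=384), rng.normal(size=384)
w2v_a, w2v_b = rng.normal(size=300), rng.normal(size=300)
score = hybrid_similarity(ctx_a, ctx_b, w2v_a, w2v_b)
```

In a real CTSR pipeline, each candidate span in the reference paper would be scored against the citance this way, and the top-scoring spans returned.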