Hiba Ahsan


2025

Elucidating Mechanisms of Demographic Bias in LLMs for Healthcare
Hiba Ahsan | Arnab Sen Sharma | Silvio Amir | David Bau | Byron C Wallace
Findings of the Association for Computational Linguistics: EMNLP 2025

We know from prior work that LLMs encode social biases, and that these biases manifest in clinical tasks. In this work we adopt tools from mechanistic interpretability to unveil sociodemographic representations and biases within LLMs in the context of healthcare. Specifically, we ask: Can we identify activations within LLMs that encode sociodemographic information (e.g., gender, race)? We find that, in three open-weight LLMs, gender information is highly localized in MLP layers and can be reliably manipulated at inference time via patching. Such interventions can surgically alter generated clinical vignettes for specific conditions, and also influence downstream clinical predictions which correlate with gender, e.g., patient risk of depression. We find that the representation of patient race is somewhat more distributed, but can also be intervened upon, to a degree. To our knowledge, this is the first application of mechanistic interpretability methods to LLMs for healthcare.
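To make the patching intervention concrete, here is a minimal sketch of generic activation patching on an MLP output, not the paper's exact procedure: the model name, layer index, module path, and prompts are illustrative assumptions. The idea is to cache the MLP output at one layer while running a "source" prompt, then overwrite the same activations during the forward pass of a "target" prompt.

```python
# Hedged sketch of activation patching (not the authors' code).
# Assumptions: a Llama-style open-weight model, a hypothetical layer index,
# and toy clinical prompts; adjust the module path for other architectures.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"   # assumption: any open-weight causal LM
LAYER = 15                           # hypothetical layer where gender info is localized

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

mlp = model.model.layers[LAYER].mlp  # Llama-style module path

# 1) Run the source prompt and cache the MLP output activations.
cache = {}
def save_hook(module, inputs, output):
    cache["mlp_out"] = output.detach()

h = mlp.register_forward_hook(save_hook)
with torch.no_grad():
    model(**tok("The patient is a 45-year-old woman.", return_tensors="pt"))
h.remove()

# 2) Re-run a target prompt, overwriting the same MLP output with the cache.
def patch_hook(module, inputs, output):
    if output.shape[1] == 1:         # skip single-token decode steps; patch the prompt pass only
        return output
    patched = output.clone()
    n = min(patched.shape[1], cache["mlp_out"].shape[1])
    patched[:, -n:, :] = cache["mlp_out"][:, -n:, :]  # crude positional alignment
    return patched

h = mlp.register_forward_hook(patch_hook)
with torch.no_grad():
    out = model.generate(**tok("The patient is a 45-year-old man.", return_tensors="pt"),
                         max_new_tokens=40)
h.remove()
print(tok.decode(out[0], skip_special_tokens=True))
```

In practice one would patch specific token positions and layers identified by a localization analysis rather than aligning from the end of the sequence as this toy example does.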

2021

Multi-Modal Image Captioning for the Visually Impaired
Hiba Ahsan | Daivat Bhatt | Kaivan Shah | Nikita Bhalla
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop

One of the ways blind people understand their surroundings is by capturing images and relying on descriptions generated by image-captioning systems. Current work on captioning images for the visually impaired does not use the textual data present in the image when generating captions. This problem is critical as many visual scenes contain text, and 21% of the questions asked by blind people about the images they capture pertain to the text present in them. In this work, we propose altering AoANet, a state-of-the-art image-captioning system, to leverage text detected in the image as an input feature. In addition, we use a pointer-generator network to copy detected text to the caption when tokens need to be reproduced accurately. Our model outperforms AoANet on the benchmark dataset VizWiz, giving a 35% and 16.2% performance improvement on CIDEr and SPICE scores, respectively.
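The copy step can be illustrated with a pointer-generator style mixing layer. The sketch below is a generic version under assumed dimensions and names, not the paper's exact architecture: a learned gate interpolates between the decoder's vocabulary distribution and a copy distribution computed by attending over the detected OCR tokens.

```python
# Hedged sketch of a pointer-generator copy mechanism (illustrative, not the paper's model).
# For simplicity it assumes detected OCR tokens map to ids in the caption vocabulary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointerGenerator(nn.Module):
    def __init__(self, hidden_dim, vocab_size):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_dim, vocab_size)  # standard generation head
        self.copy_attn = nn.Linear(hidden_dim, hidden_dim)   # attention over OCR token features
        self.p_gen = nn.Linear(2 * hidden_dim, 1)            # generate-vs-copy gate

    def forward(self, dec_state, ocr_feats, ocr_token_ids):
        # dec_state: (B, H) decoder state; ocr_feats: (B, T, H) detected-text features;
        # ocr_token_ids: (B, T) vocabulary ids of the detected OCR tokens.
        scores = torch.bmm(ocr_feats, self.copy_attn(dec_state).unsqueeze(-1)).squeeze(-1)  # (B, T)
        copy_weights = F.softmax(scores, dim=-1)
        context = torch.bmm(copy_weights.unsqueeze(1), ocr_feats).squeeze(1)                # (B, H)

        p_vocab = F.softmax(self.vocab_proj(dec_state), dim=-1)                             # (B, V)
        gate = torch.sigmoid(self.p_gen(torch.cat([dec_state, context], dim=-1)))           # (B, 1)

        # Scatter the copy weights onto vocabulary ids and mix with the generated distribution.
        p_copy = torch.zeros_like(p_vocab).scatter_add_(1, ocr_token_ids, copy_weights)
        return gate * p_vocab + (1 - gate) * p_copy
```

A full pointer-generator implementation would also extend the vocabulary to handle out-of-vocabulary OCR strings, which this simplified sketch omits.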