Named Entity Inference Attacks on Clinical LLMs: Exploring Privacy Risks and the Impact of Mitigation Strategies

Adam Sutton, Xi Bai, Kawsar Noor, Thomas Searle, Richard Dobson


Abstract
Transformer-based Large Language Models (LLMs) have achieved remarkable success across various domains, including clinical language processing, where they enable state-of-the-art performance on numerous tasks. Like all deep learning models, LLMs are susceptible to inference attacks that exploit sensitive attributes seen during training. AnonCAT, a RoBERTa-based masked language model, has been fine-tuned to de-identify sensitive clinical text; the community has a responsibility to explore the privacy risks of such models. This work proposes an attack method to infer sensitive named entities used in the training of AnonCAT models. We perform three experiments, examining the privacy implications of generating multiple names, the impact of white-box versus black-box access on attack inference performance, and the privacy-enhancing effect of Differential Privacy (DP) when applied to AnonCAT. By providing real textual predictions and privacy leakage metrics, this research contributes to understanding and mitigating the potential risks associated with deploying LLMs in sensitive domains such as healthcare.
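For readers unfamiliar with this style of attack, the fragment below is a minimal illustrative sketch of fill-mask probing against a generic RoBERTa-style masked language model: the model is asked to complete a masked name slot and its highest-probability candidates are ranked. This is not the paper's AnonCAT attack or model; the model name ("roberta-base") and the prompt template are placeholder assumptions, included only to make the idea of ranking name completions concrete.

    # Illustrative sketch only: generic fill-mask probing of a RoBERTa-style
    # masked LM, NOT the AnonCAT attack described in the paper.
    from transformers import pipeline

    # Publicly available masked LM used as a stand-in for a fine-tuned clinical model.
    fill_mask = pipeline("fill-mask", model="roberta-base")

    # A clinical-style template with a masked name slot; ranking the completions
    # the model assigns high probability to is the basic intuition behind
    # inferring named entities a model may have memorised during training.
    template = "Patient <mask> was admitted to the cardiology ward on Tuesday."

    for candidate in fill_mask(template, top_k=5):
        print(f"{candidate['token_str'].strip():>12}  p={candidate['score']:.4f}")

How this intuition maps onto the paper's actual attack, including the comparison of white-box and black-box access and the effect of DP, is detailed in the paper itself.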
Anthology ID:
2025.privatenlp-main.4
Volume:
Proceedings of the Sixth Workshop on Privacy in Natural Language Processing
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Ivan Habernal, Sepideh Ghanavati, Vijayanta Jain, Timour Igamberdiev, Shomir Wilson
Venues:
PrivateNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
42–52
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.privatenlp-main.4/
Cite (ACL):
Adam Sutton, Xi Bai, Kawsar Noor, Thomas Searle, and Richard Dobson. 2025. Named Entity Inference Attacks on Clinical LLMs: Exploring Privacy Risks and the Impact of Mitigation Strategies. In Proceedings of the Sixth Workshop on Privacy in Natural Language Processing, pages 42–52, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Named Entity Inference Attacks on Clinical LLMs: Exploring Privacy Risks and the Impact of Mitigation Strategies (Sutton et al., PrivateNLP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.privatenlp-main.4.pdf