Estimating Privacy Leakage of Augmented Contextual Knowledge in Language Models

James Flemings, Bo Jiang, Wanrong Zhang, Zafar Takhirov, Murali Annavaram


Abstract
Language models (LMs) rely on their parametric knowledge augmented with relevant contextual knowledge for certain tasks, such as question answering. However, the contextual knowledge can contain private information that may be leaked when answering queries, and how to estimate this privacy leakage is not well understood. A straightforward approach of directly comparing an LM's output to the contexts can overestimate the privacy risk, since the LM's parametric knowledge might already contain the augmented contextual knowledge. To this end, we introduce context influence, a metric that builds on differential privacy, a widely adopted privacy notion, to estimate the privacy leakage of contextual knowledge during decoding. Our approach effectively measures how each subset of the context influences an LM's response while separating out the LM's specific parametric knowledge. Using our context influence metric, we demonstrate that context privacy leakage occurs when contextual knowledge is out of distribution with respect to parametric knowledge. Moreover, we experimentally demonstrate how context influence properly attributes the privacy leakage to augmented contexts, and we evaluate how factors such as model size, context size, and generation position affect context privacy leakage. The practical implications of our results inform practitioners of the privacy risks associated with augmented contextual knowledge.
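
For intuition, below is a minimal sketch, not the paper's exact metric, of how a context-influence-style score could be computed during decoding: compare the per-token log-probability the model assigns to a response with and without the augmented context, in the spirit of a differential-privacy log-ratio. The prompt template, function names, and the max-over-tokens aggregation are illustrative assumptions.

    # Hedged sketch: one plausible instantiation of a context-influence score,
    # NOT the authors' exact definition. It contrasts the model's per-token
    # log-probabilities for a response with and without the augmented context.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def context_influence_sketch(model, tokenizer, context, question, response):
        """Return max_t [ log p(y_t | context, q, y_<t) - log p(y_t | q, y_<t) ]."""
        device = next(model.parameters()).device

        def token_logprobs(prompt, answer):
            prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
            answer_ids = tokenizer(answer, add_special_tokens=False,
                                   return_tensors="pt").input_ids.to(device)
            input_ids = torch.cat([prompt_ids, answer_ids], dim=1)
            with torch.no_grad():
                logits = model(input_ids).logits
            # Log-probs over the vocabulary for each next-token prediction.
            log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
            start = prompt_ids.shape[1] - 1  # predictions for the answer tokens
            return (log_probs[0, start:start + answer_ids.shape[1], :]
                    .gather(-1, answer_ids[0].unsqueeze(-1)).squeeze(-1))

        with_ctx = token_logprobs(f"{context}\n\nQuestion: {question}\nAnswer: ", response)
        without_ctx = token_logprobs(f"Question: {question}\nAnswer: ", response)
        # Large ratios flag tokens driven by the context rather than by
        # parametric knowledge, hence a higher context privacy leakage signal.
        return (with_ctx - without_ctx).max().item()

The key design point this sketch illustrates is the baseline without the context: tokens the model would have produced from parametric knowledge alone contribute little to the score, which is how the overestimation from naively comparing outputs to the context is avoided.
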
Anthology ID:
2025.acl-long.1220
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
25092–25108
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1220/
Cite (ACL):
James Flemings, Bo Jiang, Wanrong Zhang, Zafar Takhirov, and Murali Annavaram. 2025. Estimating Privacy Leakage of Augmented Contextual Knowledge in Language Models. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 25092–25108, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Estimating Privacy Leakage of Augmented Contextual Knowledge in Language Models (Flemings et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1220.pdf