Measuring memorization in language models via probabilistic extraction
Jamie Hayes | Marika Swanberg | Harsh Chaudhari | Itay Yona | Ilia Shumailov | Milad Nasr | Christopher A. Choquette-Choo | Katherine Lee | A. Feder Cooper
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large language models (LLMs) are susceptible to memorizing training data, raising concerns about the potential extraction of sensitive information at generation time. Discoverable extraction is the most common method for measuring this issue: split a training example into a prefix and suffix, then prompt the LLM with the prefix, and deem the example extractable if the LLM generates the matching suffix using greedy sampling. This definition yields a yes-or-no determination of whether extraction was successful with respect to a single query. Though efficient to compute, we show that this definition is unreliable because it does not account for non-determinism present in more realistic (non-greedy) sampling schemes, for which LLMs produce a range of outputs for the same prompt. We introduce probabilistic discoverable extraction, which, without additional cost, relaxes discoverable extraction by considering multiple queries to quantify the probability of extracting a target sequence. We evaluate our probabilistic measure across different models, sampling schemes, and training-data repetitions, and find that this measure provides more nuanced information about extraction risk compared to traditional discoverable extraction.
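As a rough illustration of the quantity the abstract describes, the sketch below computes the probability that a single sampled continuation reproduces a target suffix, and from it the probability of extracting that suffix within n independent queries. It assumes a Hugging Face-style causal LM and plain temperature sampling over the full vocabulary (no top-k or top-p truncation), and treats queries as independent; it is a simplified sketch, not the paper's implementation.

```python
import torch


def suffix_match_probability(model, tokenizer, prefix, suffix, temperature=1.0):
    """Probability that one sampled generation reproduces `suffix` exactly.

    Computed as the product of per-token probabilities of the suffix tokens
    under a temperature-scaled softmax, using teacher forcing over
    prefix + suffix (assumes full-vocabulary temperature sampling).
    """
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    suffix_ids = tokenizer(suffix, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([prefix_ids, suffix_ids], dim=1)

    with torch.no_grad():
        logits = model(input_ids).logits  # [1, seq_len, vocab]

    # Logits at position t predict the token at position t + 1, so the
    # suffix tokens are predicted by the positions starting at len(prefix) - 1.
    start = prefix_ids.shape[1] - 1
    suffix_logits = logits[0, start:start + suffix_ids.shape[1]]
    log_probs = torch.log_softmax(suffix_logits / temperature, dim=-1)
    token_log_probs = log_probs.gather(1, suffix_ids[0].unsqueeze(1)).squeeze(1)
    return token_log_probs.sum().exp().item()


def extraction_probability(p_single, n_queries):
    """Probability that at least one of n independent queries yields the suffix."""
    return 1.0 - (1.0 - p_single) ** n_queries
```

Under this simplification, greedy (temperature 0) discoverable extraction collapses to a yes-or-no outcome, whereas for non-zero temperature the same example gets a graded extraction probability that grows with the number of queries.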