Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models

Haritz Puerto, Martin Gubri, Sangdoo Yun, Seong Joon Oh


Abstract
Membership inference attacks (MIA) attempt to verify whether a given data sample was part of a model's training set. MIA has become relevant in recent years with the rapid development of large language models (LLMs): many are concerned about the use of copyrighted materials to train them and call for methods to detect such use. However, recent research has largely concluded that current MIA methods do not work on LLMs; even when they seem to work, it is usually because of an ill-designed experimental setup in which shortcut features enable "cheating." In this work, we argue that MIA still works on LLMs, but only when multiple documents are presented for testing. We construct new benchmarks that measure MIA performance on a continuous scale of data sizes, from sentences (n-grams) to collections of documents (multiple chunks of tokens). To validate the efficacy of current MIA approaches at greater scales, we adapt a recent Dataset Inference (DI) method to the task of binary membership detection, aggregating paragraph-level MIA features to enable document- and dataset-level MIA. This baseline achieves the first successful MIA on pre-trained and fine-tuned LLMs.
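The aggregation idea in the abstract can be sketched minimally as follows. This is an illustrative stand-in, not the paper's exact method: the Gaussian generators below are hypothetical substitutes for real per-paragraph LLM loss features (members assumed to receive slightly lower loss), and mean aggregation followed by Welch's t-statistic is just one plausible way to turn paragraph-level MIA features into a dataset-level decision.

```python
import random
import statistics


def aggregate_paragraph_scores(doc_scores):
    """Aggregate paragraph-level MIA features into one document-level score."""
    return statistics.mean(doc_scores)


def welch_t(a, b):
    """Welch's t-statistic comparing two sets of document-level scores."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)


# Hypothetical stand-in for per-paragraph LLM losses: member documents are
# assumed to get slightly lower loss (2.0 vs. 2.3) with the same noise level.
rng = random.Random(0)
member_docs = [[rng.gauss(2.0, 0.5) for _ in range(20)] for _ in range(50)]
nonmember_docs = [[rng.gauss(2.3, 0.5) for _ in range(20)] for _ in range(50)]

t = welch_t(
    [aggregate_paragraph_scores(d) for d in member_docs],
    [aggregate_paragraph_scores(d) for d in nonmember_docs],
)
# A strongly negative t suggests the suspect collection has systematically
# lower loss than the non-member reference, i.e. evidence of membership.
```

The point of the sketch is scale: a per-paragraph signal too weak to classify a single sample can still separate two *collections* of documents once aggregated, which is why testing multiple documents at once succeeds where single-sample MIA fails.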
Anthology ID:
2025.findings-naacl.234
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4165–4182
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.234/
Cite (ACL):
Haritz Puerto, Martin Gubri, Sangdoo Yun, and Seong Joon Oh. 2025. Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4165–4182, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models (Puerto et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.234.pdf