Does Data Contamination Detection Work (Well) for LLMs? A Survey and Evaluation on Detection Assumptions

Yujuan Fu, Ozlem Uzuner, Meliha Yetisgen, Fei Xia


Abstract
Large language models (LLMs) have demonstrated strong performance across various benchmarks, showing potential as general-purpose task solvers. However, because LLMs are typically trained on vast amounts of data, a significant concern in their evaluation is data contamination, where overlap between the training data and evaluation datasets inflates performance estimates. Multiple approaches have been developed to identify data contamination, but each relies on specific assumptions that may not hold universally across settings. To bridge this gap, we systematically review 50 papers on data contamination detection, categorize their underlying assumptions, and assess whether these assumptions have been rigorously validated. We identify and analyze eight categories of assumptions and test three of them as case studies. Our case studies focus on detecting direct, instance-level data contamination, also referred to as Membership Inference Attacks (MIA). Our analysis reveals that MIA approaches based on these three assumptions can perform comparably to random guessing on datasets used in LLM pretraining, suggesting that current LLMs might learn data distributions rather than memorize individual instances. Moreover, MIA can easily fail when there is a data distribution shift between the seen and unseen instances.
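To make the evaluated setting concrete: instance-level contamination detection (MIA) typically scores a candidate text by how likely the model finds it and then thresholds that score. Below is a minimal sketch of one common baseline of this kind, in the style of Min-K% Prob (Shi et al., 2024); it is illustrative only and not necessarily one of the three assumption-based approaches tested in this paper. The model name, k value, and example text are assumptions chosen for demonstration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob_score(text, model, tokenizer, k=0.2):
    # Tokenize and get per-position logits from the causal LM.
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits
    # Log-probability the model assigns to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(
        1, input_ids[0, 1:].unsqueeze(-1)
    ).squeeze(-1)
    # Average over the k fraction of lowest-probability tokens;
    # a higher average suggests the text may have been seen in training.
    n_keep = max(1, int(k * token_log_probs.numel()))
    lowest = torch.topk(token_log_probs, n_keep, largest=False).values
    return lowest.mean().item()

model_name = "gpt2"  # placeholder; any HuggingFace causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
print(min_k_prob_score("Example sentence to score.", model, tokenizer))

A score above a threshold calibrated on known seen/unseen data would flag the text as a training member; the paper's finding is that, on LLM pretraining data, scores of this kind can separate members from non-members no better than chance.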
Anthology ID: 2025.findings-naacl.291
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 5235–5256
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.291/
Cite (ACL): Yujuan Fu, Ozlem Uzuner, Meliha Yetisgen, and Fei Xia. 2025. Does Data Contamination Detection Work (Well) for LLMs? A Survey and Evaluation on Detection Assumptions. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5235–5256, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Does Data Contamination Detection Work (Well) for LLMs? A Survey and Evaluation on Detection Assumptions (Fu et al., Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.291.pdf