Identifying Pre-training Data in LLMs: A Neuron Activation-Based Detection Framework

Hongyi Tang, Zhihao Zhu, Yi Yang


Abstract
The performance of large language models (LLMs) is closely tied to their training data, which can include copyrighted material or private information, raising legal and ethical concerns. LLMs also face criticism for dataset contamination and for internalizing biases. To address these issues, the Pre-Training Data Detection (PDD) task was proposed to identify whether specific data was included in an LLM's pre-training corpus. However, existing PDD methods often rely on superficial features such as prediction confidence and loss, resulting in mediocre performance. To improve this, we introduce NA-PDD, a novel algorithm that analyzes differential neuron activation patterns between training and non-training data in LLMs. It builds on the observation that these two kinds of data activate different neurons during LLM inference. We also introduce CCNewsPDD, a temporally unbiased benchmark that applies rigorous data transformations to ensure consistent time distributions between training and non-training data. Our experiments demonstrate that NA-PDD significantly outperforms existing methods across three benchmarks and multiple LLMs.
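To make the core idea concrete, below is a minimal sketch of neuron-activation-based membership scoring. It is not the authors' NA-PDD implementation: the choice of GPT-2 as the probe model, the "fires if post-GELU activation is positive" threshold, the cosine-overlap score, and the names `activation_profile` and `na_score` are all illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of neuron-activation-based pre-training data detection.
# Assumptions (not from the paper): GPT-2 as the probe model, a neuron
# "fires" when its post-GELU activation is positive, and membership is
# scored by cosine overlap with reference activation profiles.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def activation_profile(text: str) -> torch.Tensor:
    """Return a binary vector marking which MLP neurons fire on `text`."""
    fired, hooks = [], []

    def hook(_module, _inputs, output):
        # Collapse (batch, seq) and mark a neuron as firing if its
        # activation is positive anywhere in the sequence.
        flat = output.reshape(-1, output.shape[-1])
        fired.append((flat > 0).any(dim=0))

    # Hook the activation module inside each transformer block's MLP.
    for block in model.transformer.h:
        hooks.append(block.mlp.act.register_forward_hook(hook))
    with torch.no_grad():
        model(**tok(text, return_tensors="pt"))
    for h in hooks:
        h.remove()
    return torch.cat(fired).float()

def na_score(candidate: str, member_texts, nonmember_texts) -> float:
    """Score how much `candidate`'s firing pattern resembles known
    training members versus known non-members (higher = more member-like)."""
    cand = activation_profile(candidate)

    def mean_sim(texts):
        sims = [torch.nn.functional.cosine_similarity(
                    cand, activation_profile(t), dim=0) for t in texts]
        return torch.stack(sims).mean()

    return (mean_sim(member_texts) - mean_sim(nonmember_texts)).item()
```

In this sketch, a positive `na_score` would suggest the candidate's firing pattern is closer to known training data than to held-out data; the paper's actual detection statistic and calibration should be taken from the publication itself.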
Anthology ID:
2025.emnlp-main.946
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
18738–18751
URL:
https://preview.aclanthology.org/ingest-luhme/2025.emnlp-main.946/
DOI:
10.18653/v1/2025.emnlp-main.946
Cite (ACL):
Hongyi Tang, Zhihao Zhu, and Yi Yang. 2025. Identifying Pre-training Data in LLMs: A Neuron Activation-Based Detection Framework. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 18738–18751, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Identifying Pre-training Data in LLMs: A Neuron Activation-Based Detection Framework (Tang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-luhme/2025.emnlp-main.946.pdf
Checklist:
2025.emnlp-main.946.checklist.pdf