Zhihao Zhu
2025
Identifying Pre-training Data in LLMs: A Neuron Activation-Based Detection Framework
Hongyi Tang | Zhihao Zhu | Yi Yang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The performance of large language models (LLMs) is closely tied to their training data, which can include copyrighted material or private information, raising legal and ethical concerns. LLMs also face criticism for dataset contamination and for internalizing biases. To address these issues, the Pre-Training Data Detection (PDD) task was proposed to identify whether specific data was included in an LLM’s pre-training corpus. However, existing PDD methods often rely on superficial features such as prediction confidence and loss, resulting in mediocre performance. To improve on this, we introduce NA-PDD, a novel algorithm that analyzes differential neuron activation patterns between training and non-training data in LLMs. It is based on the observation that these two data types activate different neurons during LLM inference. We also introduce CCNewsPDD, a temporally unbiased benchmark that employs rigorous data transformations to ensure consistent time distributions between training and non-training data. Our experiments demonstrate that NA-PDD significantly outperforms existing methods across three benchmarks and multiple LLMs.
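The abstract does not spell out NA-PDD's procedure, so the following is only a minimal sketch of the general idea it describes: recording which neurons fire during inference and scoring membership by comparing those activation patterns against profiles from known training and non-training texts. The model choice (gpt2), the activation threshold, and the functions `activated_neurons` and `membership_score` are illustrative assumptions, not the paper's algorithm.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper evaluates larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def activated_neurons(text, threshold=0.0):
    """Return the set of (layer, neuron) pairs whose post-activation MLP
    output exceeds `threshold` for at least one token of `text`."""
    fired = set()
    handles = []

    def make_hook(layer_idx):
        def hook(module, inputs, output):
            # output has shape (batch, seq_len, intermediate_size)
            active = (output > threshold).reshape(-1, output.shape[-1]).any(dim=0)
            for j in active.nonzero().flatten().tolist():
                fired.add((layer_idx, j))
        return hook

    # GPT-2 blocks expose the MLP nonlinearity as `mlp.act`;
    # other architectures name these modules differently.
    for i, block in enumerate(model.transformer.h):
        handles.append(block.mlp.act.register_forward_hook(make_hook(i)))

    with torch.no_grad():
        model(**tokenizer(text, return_tensors="pt"))
    for h in handles:
        h.remove()
    return fired

def membership_score(text, member_neurons, nonmember_neurons):
    """Toy membership score: overlap with neurons characteristic of
    training data minus overlap with neurons characteristic of
    non-training data (both profiles built from labeled reference texts)."""
    fired = activated_neurons(text)
    return len(fired & member_neurons) - len(fired & nonmember_neurons)
```

A text is then flagged as likely pre-training data when its score exceeds a calibrated cutoff; how NA-PDD actually aggregates and thresholds activations is detailed in the paper itself.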
Forest for the Trees: Overarching Prompting Evokes High-Level Reasoning in Large Language Models
Haoran Liao | Shaohua Hu | Zhihao Zhu | Hao He | Yaohui Jin
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Chain-of-thought (CoT) prompting and subsequent methods adopt a deductive paradigm that decomposes the reasoning process, demonstrating remarkable performance across NLP tasks. However, this paradigm risks getting bogged down in low-level semantic details, hindering large language models (LLMs) from correctly understanding, selecting, and composing conditions. In this work, we present Overarching Prompting (OaP), a simple prompting method that elicits high-level thinking in LLMs. Specifically, OaP first abstracts the whole problem into a simplified archetype and formulates strategies grounded in concepts and principles, establishing an overarching perspective to guide reasoning. We conducted experiments with SoTA models, including ChatGPT, InstructGPT, and Llama3-70B-instruct, and achieved promising performance across Knowledge QA, Mathematical Reasoning, and Open-Domain Reasoning tasks. For instance, OaP improved over ChatGPT and CoT by 19.0% and 3.1% on MMLU’s College Physics, by 8.8% and 2.3% on GSM8k, and by 10.3% and 2.5% on StrategyQA, respectively.
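A two-stage prompting flow of the kind the abstract describes could look like the sketch below: first ask the model to abstract the problem into an archetype with its governing concepts, principles, and strategy, then solve the concrete problem under that overview. This is an assumption inferred from the abstract, not the paper's exact prompts; `chat` is a hypothetical placeholder for any chat-completion client, and the prompt wording is illustrative.

```python
def chat(messages: list[dict]) -> str:
    """Placeholder for an LLM chat-completion call (e.g., an OpenAI or
    local-model client); returns the assistant reply as a string."""
    raise NotImplementedError

def overarching_prompting(problem: str) -> str:
    # Stage 1: build the overarching perspective (archetype + principles).
    overview = chat([{
        "role": "user",
        "content": (
            "Abstract the following problem into a simplified archetype. "
            "State the key concepts and principles it involves and a "
            "high-level strategy for solving it. Do not solve it yet.\n\n"
            + problem
        ),
    }])
    # Stage 2: solve the original problem guided by that perspective.
    return chat([{
        "role": "user",
        "content": (
            "Problem:\n" + problem
            + "\n\nOverarching perspective:\n" + overview
            + "\n\nUsing this perspective, solve the problem step by step "
            "and state the final answer."
        ),
    }])
```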
Co-authors
- Hao He (1)
- Shaohua Hu (1)
- Yaohui Jin (1)
- Haoran Liao (1)
- Hongyi Tang (1)
- Yi Yang (1)