Neuron-Level Differentiation of Memorization and Generalization in Large Language Models
Ko-Wei Huang, Yi-Fu Fu, Ching-Yu Tsai, Yu-Chieh Tu, Tzu-ling Cheng, Cheng-Yu Lin, Yi-Ting Yang, Heng-Yi Liu, Keng-Te Liao, Da-Cheng Juan, Shou-De Lin
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
We investigate how Large Language Models (LLMs) distinguish between memorization and generalization at the neuron level. Through carefully designed tasks, we identify distinct neuron subsets responsible for each behavior. Experiments on both a GPT-2 model trained from scratch and a pretrained LLaMA-3.2 model fine-tuned with LoRA show consistent neuron-level specialization. We further demonstrate that inference-time interventions on these neurons can steer the model’s behavior toward memorization or generalization. To assess robustness, we evaluate intra-task and inter-task consistency, confirming that these neuron-behavior associations reflect generalizable patterns rather than dataset-specific artifacts. Our findings reveal a modular structure in LLMs and enable control over memorization and generalization behaviors at inference time.
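The inference-time intervention described in the abstract can be pictured as scaling or ablating the activations of the identified neurons during a forward pass. Below is a minimal, hypothetical sketch assuming PyTorch forward hooks on a Hugging Face GPT-2; the layer index, neuron indices, and scaling factor are illustrative placeholders, not the neurons or procedure identified in the paper.

```python
# Hypothetical sketch: intervene on specific MLP neurons of GPT-2 at inference
# time via a PyTorch forward hook. Indices below are placeholders, not the
# memorization/generalization neurons identified in the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

layer_idx = 6                         # hypothetical layer hosting the target neurons
neuron_idx = torch.tensor([17, 42])   # hypothetical target neuron indices
scale = 0.0                           # 0.0 ablates the neurons; >1.0 amplifies them

def intervene(module, inputs, output):
    # Scale the chosen hidden units of the MLP activation before they are
    # projected back into the residual stream.
    output[..., neuron_idx] *= scale
    return output

# Hook the post-activation output of the chosen layer's MLP.
handle = model.transformer.h[layer_idx].mlp.act.register_forward_hook(intervene)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(generated[0]))

handle.remove()  # detach the hook to restore unmodified behavior
```

Setting the scale to 0.0 suppresses the chosen units while values above 1.0 amplify them; the paper's actual interventions may differ in which layers, neurons, and scaling scheme are used.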