SAFR: Neuron Redistribution for Interpretability

Ruidi Chang, Chunyuan Deng, Hanjie Chen


Abstract
Superposition refers to the encoding of multiple features within a single neuron, a property common in deep neural networks. It allows neurons to combine and represent multiple features, enabling the model to capture intricate information and handle complex tasks. Despite promising performance, superposition diminishes model interpretability. This paper presents a novel approach to enhancing model interpretability by regularizing feature superposition. We introduce SAFR, which adds regularization terms to the loss function to promote monosemantic representations for important tokens while encouraging polysemanticity for correlated token pairs; important tokens and correlated token pairs are identified via VMASK and attention weights, respectively. We evaluate SAFR with a transformer model on two classification tasks. Experiments demonstrate that SAFR improves model interpretability without compromising prediction performance. In addition, SAFR provides explanations by visualizing neuron allocation within the intermediate layers.
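The abstract describes an objective that combines a task loss with two regularizers: one pushing important tokens toward monosemantic (sparse) representations, and one encouraging representational overlap for correlated token pairs. The paper's exact formulation is not reproduced here; the sketch below is a hypothetical illustration of that structure, assuming the important-token mask (from VMASK) and the correlated pairs (from attention weights) are already computed, with `safr_loss`, `lambda_mono`, and `lambda_poly` as invented names.

```python
import torch
import torch.nn.functional as F

def safr_loss(logits, labels, hidden, important_mask, corr_pairs,
              lambda_mono=0.1, lambda_poly=0.1):
    """Sketch of a SAFR-style objective: task loss plus two regularizers.

    logits:         (batch, num_classes) classifier outputs
    hidden:         (batch, seq_len, d) intermediate representations
    important_mask: (batch, seq_len) bool, tokens flagged as important
                    (the paper identifies these via VMASK; assumed given here)
    corr_pairs:     list of (i, j) token-position pairs judged correlated
                    via attention weights (assumed precomputed)
    """
    task_loss = F.cross_entropy(logits, labels)

    # Monosemanticity penalty: push important tokens' representations
    # toward sparsity (few active neurons), here via an L1 term.
    imp = hidden[important_mask]                       # (n_important, d)
    mono = imp.abs().mean() if imp.numel() else hidden.new_zeros(())

    # Polysemanticity encouragement: reward overlap (cosine similarity)
    # between representations of correlated token pairs.
    poly = hidden.new_zeros(())
    for i, j in corr_pairs:
        poly = poly + F.cosine_similarity(hidden[:, i], hidden[:, j],
                                          dim=-1).mean()
    if corr_pairs:
        poly = poly / len(corr_pairs)

    # Minimizing -poly increases overlap for correlated pairs.
    return task_loss + lambda_mono * mono - lambda_poly * poly
```

Because both terms are simple additions to the loss, this kind of regularization requires no architectural change to the transformer, which matches the paper's claim that SAFR "simply applies regularizations to the loss function."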
Anthology ID:
2025.findings-naacl.112
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2117–2126
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.112/
Cite (ACL):
Ruidi Chang, Chunyuan Deng, and Hanjie Chen. 2025. SAFR: Neuron Redistribution for Interpretability. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 2117–2126, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
SAFR: Neuron Redistribution for Interpretability (Chang et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.112.pdf