Neuron-level Interpretation of Deep NLP Models: A Survey

Hassan Sajjad, Nadir Durrani, Fahim Dalvi


Abstract
The proliferation of deep neural networks across many domains has increased the need for interpretability of these models. Preliminary work along these lines, and the surveys covering it, focused on high-level representation analysis. A more recent branch of work, however, concentrates on interpretability at a finer granularity: analyzing individual neurons within these models. In this paper, we survey the work on neuron analysis, including: i) methods to discover and understand neurons in a network; ii) evaluation methods; iii) major findings, including cross-architectural comparisons, that neuron analysis has unraveled; iv) applications of neuron probing, such as controlling the model and domain adaptation; and v) a discussion of open issues and future research directions.
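For readers unfamiliar with the term, "neuron probing" in this literature typically means training a very simple classifier on the activations of individual neurons and ranking neurons by how predictive each one is of some property of interest. Below is a minimal, self-contained Python sketch of that general recipe; the synthetic data and all names are hypothetical, and it illustrates the broad technique rather than any specific method covered in the survey.

    # Illustrative sketch of per-neuron probing: rank neurons by how well
    # a one-feature classifier on each neuron alone predicts a property.
    # NOT the survey's specific method; data below is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Pretend activations: 1000 tokens x 64 neurons from some model layer.
    activations = rng.normal(size=(1000, 64))
    # Pretend binary labels (e.g., "token is past tense"), constructed to
    # correlate with neuron 7 so the ranking has something to recover.
    labels = (activations[:, 7] + 0.3 * rng.normal(size=1000) > 0).astype(int)

    def neuron_score(acts, y, j):
        """Accuracy of a probe that sees only neuron j's activation."""
        probe = LogisticRegression().fit(acts[:, [j]], y)
        return probe.score(acts[:, [j]], y)

    scores = [neuron_score(activations, labels, j)
              for j in range(activations.shape[1])]
    ranking = np.argsort(scores)[::-1]
    print("Most predictive neurons:", ranking[:5])

In practice, probes are trained on held-out splits and often regularized to discourage spurious single-neuron correlations; this sketch omits those details for brevity.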
Anthology ID: 2022.tacl-1.74
Volume: Transactions of the Association for Computational Linguistics, Volume 10
Year: 2022
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 1285–1303
URL: https://aclanthology.org/2022.tacl-1.74
DOI: 10.1162/tacl_a_00519
Cite (ACL): Hassan Sajjad, Nadir Durrani, and Fahim Dalvi. 2022. Neuron-level Interpretation of Deep NLP Models: A Survey. Transactions of the Association for Computational Linguistics, 10:1285–1303.
Cite (Informal): Neuron-level Interpretation of Deep NLP Models: A Survey (Sajjad et al., TACL 2022)
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.tacl-1.74.pdf