Locally Aggregated Feature Attribution on Natural Language Model Understanding

Sheng Zhang, Jin Wang, Haitao Jiang, Rui Song


Abstract
With the growing popularity of deep-learning models, model understanding has become increasingly important. Much effort has been devoted to demystifying deep neural networks for better explainability. Some feature attribution methods have shown promising results in computer vision, especially the gradient-based methods, where effectively smoothing the gradients with reference data is the key to a robust and faithful result. However, directly applying these gradient-based methods to NLP tasks is not trivial because the input consists of discrete tokens and the “reference” tokens are not explicitly defined. In this work, we propose Locally Aggregated Feature Attribution (LAFA), a novel gradient-based feature attribution method for NLP models. Instead of relying on obscure reference tokens, it smooths gradients by aggregating similar reference texts derived from language model embeddings. For evaluation, we also design experiments on different NLP tasks, including entity recognition and sentiment analysis on public datasets and keyword detection on a constructed Amazon catalogue dataset. The superior performance of the proposed method is demonstrated through experiments.
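The core idea in the abstract (smoothing input gradients by aggregating over reference texts that are nearby in embedding space) can be illustrated with a minimal sketch. This is not the authors' algorithm: the toy logistic scorer, the cosine-similarity neighbor selection, and the `aggregated_attribution` helper are all illustrative assumptions standing in for a real language model and the paper's actual aggregation scheme.

```python
import numpy as np

# Toy differentiable "model": a logistic scorer over embedding vectors,
# standing in for a language model whose input gradients we want to smooth.
rng = np.random.default_rng(0)
w = rng.normal(size=8)

def score(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

def grad(x):
    # Analytic gradient of sigmoid(w . x) with respect to x.
    s = score(x)
    return s * (1.0 - s) * w

def aggregated_attribution(x, references, k=3):
    """Average input gradients over the k reference embeddings closest to x
    (cosine similarity), then scale by the input (gradient-times-input style).
    A hypothetical helper illustrating local aggregation, not LAFA itself."""
    sims = references @ x / (np.linalg.norm(references, axis=1) * np.linalg.norm(x))
    nearest = references[np.argsort(-sims)[:k]]
    grads = [grad(x)] + [grad(r) for r in nearest]
    return x * np.mean(grads, axis=0)

x = rng.normal(size=8)                 # embedding of the input text
refs = rng.normal(size=(50, 8))        # embeddings of candidate reference texts
attr = aggregated_attribution(x, refs) # one attribution score per dimension
```

Averaging over locally similar references plays the role that noisy input copies play in SmoothGrad-style methods for vision, sidestepping the need to define a single "reference token" for discrete text input.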
Anthology ID:
2022.naacl-main.159
Volume:
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
2189–2201
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2022.naacl-main.159/
DOI:
10.18653/v1/2022.naacl-main.159
Cite (ACL):
Sheng Zhang, Jin Wang, Haitao Jiang, and Rui Song. 2022. Locally Aggregated Feature Attribution on Natural Language Model Understanding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2189–2201, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Locally Aggregated Feature Attribution on Natural Language Model Understanding (Zhang et al., NAACL 2022)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2022.naacl-main.159.pdf
Video:
https://preview.aclanthology.org/build-pipeline-with-new-library/2022.naacl-main.159.mp4
Data
GLUE, SST, SST-2