Gradient-based Analysis of NLP Models is Manipulable

Junlin Wang, Jens Tuyls, Eric Wallace, Sameer Singh


Abstract
Gradient-based analysis methods, such as saliency map visualizations and adversarial input perturbations, have found widespread use in interpreting neural NLP models due to their simplicity, flexibility, and most importantly, the fact that they directly reflect the model internals. In this paper, however, we demonstrate that the gradients of a model are easily manipulable, and thus bring into question the reliability of gradient-based analyses. In particular, we merge the layers of a target model with a Facade Model that overwhelms the gradients without affecting the predictions. This Facade Model can be trained to have gradients that are misleading and irrelevant to the task, such as focusing only on the stop words in the input. On a variety of NLP tasks (sentiment analysis, NLI, and QA), we show that the merged model effectively fools different analysis tools: saliency maps differ significantly from the original model’s, input reduction keeps more irrelevant input tokens, and adversarial perturbations identify unimportant tokens as being highly important.
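The merging idea described in the abstract can be illustrated with a minimal, hedged sketch. This is not the authors' implementation: it assumes both models map token embeddings to class logits and shows the merge as a simple sum of outputs (the paper merges at the layer level), and the names MergedModel and gradient_saliency are hypothetical. The point it illustrates is that the merged prediction can stay governed by the target model while gradient-based attributions are dominated by the Facade.

import torch
import torch.nn as nn


class MergedModel(nn.Module):
    # Hypothetical sketch: the merge is shown as a sum of logits; the paper
    # merges individual layers, so this is only illustrative.
    def __init__(self, target: nn.Module, facade: nn.Module):
        super().__init__()
        self.target = target
        self.facade = facade

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # The Facade is trained so its logits are nearly constant across inputs
        # (leaving the argmax prediction to the target model), while its input
        # gradients are large and concentrated on chosen tokens such as stop
        # words, so they dominate gradient-based attributions.
        return self.target(embeddings) + self.facade(embeddings)


def gradient_saliency(model: nn.Module, embeddings: torch.Tensor, label: int) -> torch.Tensor:
    # Simple gradient-times-input saliency over token embeddings; applied to
    # the merged model, the scores reflect the Facade's gradients rather than
    # the target model's.
    embeddings = embeddings.clone().detach().requires_grad_(True)
    logits = model(embeddings)          # shape: (1, num_classes)
    logits[0, label].backward()
    return (embeddings.grad * embeddings).sum(dim=-1).abs()  # per-token scores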
Anthology ID:
2020.findings-emnlp.24
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
247–258
URL:
https://aclanthology.org/2020.findings-emnlp.24
DOI:
10.18653/v1/2020.findings-emnlp.24
Cite (ACL):
Junlin Wang, Jens Tuyls, Eric Wallace, and Sameer Singh. 2020. Gradient-based Analysis of NLP Models is Manipulable. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 247–258, Online. Association for Computational Linguistics.
Cite (Informal):
Gradient-based Analysis of NLP Models is Manipulable (Wang et al., Findings 2020)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2020.findings-emnlp.24.pdf
Optional supplementary material:
 2020.findings-emnlp.24.OptionalSupplementaryMaterial.zip
Data
SNLI, SQuAD