White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks

Yotam Gil, Yoav Chai, Or Gorodissky, Jonathan Berant


Abstract
Adversarial examples are important for understanding the behavior of neural models, and can improve their robustness through adversarial training. Recent work in natural language processing generated adversarial examples by assuming white-box access to the attacked model, and optimizing the input directly against it (Ebrahimi et al., 2018). In this work, we show that the knowledge implicit in the optimization procedure can be distilled into another more efficient neural network. We train a model to emulate the behavior of a white-box attack and show that it generalizes well across examples. Moreover, it reduces adversarial example generation time by 19x-39x. We also show that our approach transfers to a black-box setting, by attacking the Google Perspective API and exposing its vulnerability. Our attack flips the API-predicted label in 42% of the generated examples, while humans maintain high accuracy in predicting the gold label.
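To make the distillation idea in the abstract concrete, the sketch below shows one plausible setup, not the authors' actual implementation: a white-box HotFlip-style attack (the "teacher") selects a character flip against the attacked model, and a student network is trained to predict that flip from the text alone. All class, function, and parameter names here (FlipStudent, distill_step, etc.) are hypothetical illustrations.

```python
# Minimal sketch (hypothetical, not the paper's code) of distilling a
# white-box HotFlip-style attack into a student network that proposes flips.
import torch
import torch.nn as nn

class FlipStudent(nn.Module):
    """Student: encodes a character sequence and scores replacement characters per position."""
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        # For each character position, predict a distribution over replacement characters.
        self.flip_head = nn.Linear(2 * hidden, vocab_size)

    def forward(self, char_ids):                      # (batch, seq_len)
        h, _ = self.encoder(self.emb(char_ids))       # (batch, seq_len, 2*hidden)
        return self.flip_head(h)                      # (batch, seq_len, vocab_size)

def distill_step(student, optimizer, char_ids, teacher_pos, teacher_char):
    """One training step: imitate the flip chosen by the white-box (teacher) attack.

    teacher_pos / teacher_char are the position and replacement character that a
    gradient-based HotFlip-style attack selected against the attacked model.
    """
    logits = student(char_ids)                                    # (B, L, V)
    batch = torch.arange(char_ids.size(0))
    flip_logits = logits[batch, teacher_pos]                      # (B, V)
    loss = nn.functional.cross_entropy(flip_logits, teacher_char)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At attack time, the trained student can propose its top-scoring flip with a single forward pass and no gradient access to the target model, which is consistent with the black-box transfer and the speedup described in the abstract.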
Anthology ID:
N19-1139
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1373–1379
URL:
https://aclanthology.org/N19-1139
DOI:
10.18653/v1/N19-1139
Cite (ACL):
Yotam Gil, Yoav Chai, Or Gorodissky, and Jonathan Berant. 2019. White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1373–1379, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks (Gil et al., NAACL 2019)
PDF:
https://preview.aclanthology.org/autopr/N19-1139.pdf
Code
orgoro/white-2-black