Demin Song
2021
Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning
Linyang Li
|
Demin Song
|
Xiaonan Li
|
Jiehang Zeng
|
Ruotian Ma
|
Xipeng Qiu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Pre-Trained Models have been widely applied and recently proved vulnerable under backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers. When the triggers are activated, even the fine-tuned model will predict pre-defined labels, causing a security threat. These backdoors generated by the poisoning methods can be erased by changing hyper-parameters during fine-tuning or detected by finding the triggers. In this paper, we propose a stronger weight-poisoning attack method that introduces a layerwise weight poisoning strategy to plant deeper backdoors; we also introduce a combinatorial trigger that cannot be easily detected. The experiments on text classification tasks show that previous defense methods cannot resist our weight-poisoning method, which indicates that our method can be widely applied and may provide hints for future model robustness studies.
Search