@inproceedings{he-etal-2021-model,
    title = "Model Extraction and Adversarial Transferability, Your {BERT} is Vulnerable!",
    author = "He, Xuanli  and
      Lyu, Lingjuan  and
      Xu, Qiongkai  and
      Sun, Lichao",
    editor = "Toutanova, Kristina  and
      Rumshisky, Anna  and
      Zettlemoyer, Luke  and
      Hakkani-Tur, Dilek  and
      Beltagy, Iz  and
      Bethard, Steven  and
      Cotterell, Ryan  and
      Chakraborty, Tanmoy  and
      Zhou, Yichao",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.naacl-main.161/",
    doi = "10.18653/v1/2021.naacl-main.161",
    pages = "2006--2012",
    abstract = "Natural language processing (NLP) tasks, ranging from text classification to text generation, have been revolutionised by pretrained language models such as BERT. This allows corporations to easily build powerful APIs by encapsulating fine-tuned BERT models for downstream tasks. However, when a fine-tuned BERT model is deployed as a service, it may suffer from various attacks launched by malicious users. In this work, we first present how an adversary can steal a BERT-based API service (the victim/target model) on multiple benchmark datasets with limited prior knowledge and queries. We further show that the extracted model can lead to highly transferable adversarial attacks against the victim model. Our studies indicate that the potential vulnerabilities of BERT-based API services still hold, even when there is an architectural mismatch between the victim model and the attack model. Finally, we investigate two defence strategies to protect the victim model, and find that unless the performance of the victim model is sacrificed, both model extraction and adversarial transferability can effectively compromise the target models."
}