Dynamic Transformers Provide a False Sense of Efficiency

Yiming Chen, Simin Chen, Zexin Li, Wei Yang, Cong Liu, Robby Tan, Haizhou Li


Abstract
Despite much success in natural language processing (NLP), pre-trained language models typically incur a high computational cost during inference. Multi-exit architectures are a mainstream approach to this issue, trading off efficiency and accuracy, where the computational savings come from exiting early. However, whether such savings from early exiting are robust remains unknown. Motivated by this, we first show that directly adapting existing adversarial attack approaches targeting model accuracy cannot significantly reduce inference efficiency. To this end, we propose SAME, a simple yet effective slowdown attack framework specially tailored to reduce the efficiency of multi-exit models. By leveraging the design characteristics of multi-exit models, we utilize all internal predictions to guide adversarial sample generation, rather than considering only the final prediction. Experiments on the GLUE benchmark show that SAME effectively diminishes the efficiency gain of various multi-exit models by 80% on average, convincingly validating its effectiveness and generalization ability.
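The abstract describes guiding adversarial sample generation with all internal exit predictions rather than only the final one. As a rough, hypothetical illustration (not the authors' SAME implementation), the PyTorch-style sketch below aggregates a loss over every exit's logits, pushing each exit toward a low-confidence prediction so that confidence-based early-exit criteria are unlikely to trigger; the name slowdown_loss and the exit_logits interface are assumptions made here for illustration only.

```python
# Hypothetical sketch, not the authors' SAME implementation: an objective
# that uses every internal exit of a multi-exit model, not just the final
# layer. Assumes the model exposes one logits tensor per exit.
import torch
import torch.nn.functional as F

def slowdown_loss(exit_logits):
    """exit_logits: list of [batch, num_classes] tensors, one per internal exit.

    Minimizing this loss pushes every exit toward a uniform (maximum-entropy)
    prediction, one plausible way to keep confidence-based exit criteria from
    triggering so the full network depth must be executed.
    """
    losses = []
    for logits in exit_logits:
        probs = F.softmax(logits, dim=-1)
        uniform = torch.full_like(probs, 1.0 / probs.size(-1))
        # KL(probs || uniform): small when the exit is maximally uncertain.
        losses.append(F.kl_div(uniform.log(), probs, reduction="batchmean"))
    return torch.stack(losses).mean()

# An attacker would then search for small input perturbations (e.g., word
# substitutions) that minimize slowdown_loss, delaying every early exit.
```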
Anthology ID:
2023.acl-long.395
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7164–7180
URL:
https://aclanthology.org/2023.acl-long.395
DOI:
10.18653/v1/2023.acl-long.395
Cite (ACL):
Yiming Chen, Simin Chen, Zexin Li, Wei Yang, Cong Liu, Robby Tan, and Haizhou Li. 2023. Dynamic Transformers Provide a False Sense of Efficiency. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7164–7180, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Dynamic Transformers Provide a False Sense of Efficiency (Chen et al., ACL 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2023.acl-long.395.pdf
Video:
 https://preview.aclanthology.org/nschneid-patch-2/2023.acl-long.395.mp4