Abstract
Pre-trained language models (PLMs) have been shown to be vulnerable to minor word changes, which poses a significant threat to real-world systems. While previous studies directly manipulate word inputs, they are limited by their means of generating adversarial samples and lack generalization to versatile real-world attacks. This paper studies the basic structure of transformer-based PLMs, the self-attention (SA) mechanism. (1) We propose a powerful perturbation technique named ‘HackAttend,’ which perturbs the attention scores within the SA matrices via meticulously crafted attention masks. We show that state-of-the-art PLMs are highly vulnerable: minor attention perturbations (1%) yield a very high attack success rate (98%). Our paper extends the conventional text attack of word perturbations to more general structural perturbations. (2) We introduce ‘S-Attend,’ a novel smoothing technique that effectively makes SA robust via structural perturbations. We empirically demonstrate that this simple yet effective technique achieves robust performance on par with adversarial training when facing various text attackers.
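The abstract describes HackAttend as perturbing a small fraction of attention-score entries through crafted attention masks. The snippet below is a minimal, hypothetical sketch of that idea for a standard scaled dot-product self-attention layer; the function name `masked_self_attention`, the tensor shapes, and the random selection of entries are illustrative assumptions, not the paper's adversarially crafted mask-selection procedure.

```python
# Hypothetical sketch: suppressing ~1% of attention-score entries with a mask,
# in the spirit of HackAttend. The paper selects entries adversarially; here
# they are chosen uniformly at random purely for illustration.
import torch
import torch.nn.functional as F

def masked_self_attention(q, k, v, perturb_ratio=0.01):
    """Scaled dot-product attention with a fraction of score entries masked out.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    perturb_ratio: fraction of attention-score entries to suppress (~1% in the paper).
    """
    d = q.size(-1)
    scores = torch.matmul(q, k.transpose(-2, -1)) / d ** 0.5  # (B, H, L, L)

    # Perturbation mask: True keeps an entry, False suppresses it before softmax.
    keep = torch.rand_like(scores) > perturb_ratio
    scores = scores.masked_fill(~keep, float("-inf"))

    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v)

# Example usage with random tensors (batch=2, heads=12, seq_len=16, head_dim=64):
# q = k = v = torch.randn(2, 12, 16, 64)
# out = masked_self_attention(q, k, v)
```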
- Anthology ID: 2024.lrec-main.1496
- Volume: Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
- Month: May
- Year: 2024
- Address: Torino, Italia
- Editors: Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
- Venues: LREC | COLING
- Publisher: ELRA and ICCL
- Pages: 17225–17236
- URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.lrec-main.1496/
- Cite (ACL): Khai Jiet Liong, Hongqiu Wu, and Hai Zhao. 2024. Unveiling Vulnerability of Self-Attention. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 17225–17236, Torino, Italia. ELRA and ICCL.
- Cite (Informal): Unveiling Vulnerability of Self-Attention (Liong et al., LREC-COLING 2024)
- PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.lrec-main.1496.pdf