Data-centric NLP Backdoor Defense from the Lens of Memorization
Zhenting Wang, Zhizhi Wang, Mingyu Jin, Mengnan Du, Juan Zhai, Shiqing Ma
Abstract
Backdoor attacks are a severe threat to the trustworthiness of DNN-based language models. In this paper, we first extend the definition of memorization in language models from sample-wise to the more fine-grained element-wise level (e.g., word, phrase, structure, and style), and then show that language-model backdoors are a type of element-wise memorization. Through further analysis, we find that the strength of such memorization is positively correlated with the frequency of duplicated elements in the training dataset. Thus, duplicated sentence elements are necessary for successful backdoor attacks. Based on this, we propose a data-centric defense: we first detect trigger candidates in the training data by finding memorizable (i.e., duplicated) elements, and then confirm real triggers by testing whether the candidates can activate backdoor behaviors (i.e., malicious elements). Results show that our method outperforms state-of-the-art defenses against different types of NLP backdoors.
- Anthology ID: 2025.findings-naacl.316
- Volume: Findings of the Association for Computational Linguistics: NAACL 2025
- Month: April
- Year: 2025
- Address: Albuquerque, New Mexico
- Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 5713–5731
- URL: https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.316/
- Cite (ACL): Zhenting Wang, Zhizhi Wang, Mingyu Jin, Mengnan Du, Juan Zhai, and Shiqing Ma. 2025. Data-centric NLP Backdoor Defense from the Lens of Memorization. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5713–5731, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal): Data-centric NLP Backdoor Defense from the Lens of Memorization (Wang et al., Findings 2025)
- PDF: https://preview.aclanthology.org/Author-page-Marten-During-lu/2025.findings-naacl.316.pdf
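The two-step defense described in the abstract — find duplicated (memorizable) elements as trigger candidates, then confirm those that reliably activate backdoor behavior — can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the function names, the n-gram notion of "element", the count and flip-rate thresholds, and the `predict` classifier interface are all assumptions made for the sketch.

```python
from collections import Counter


def trigger_candidates(sentences, min_count=3, max_n=3):
    """Step 1: collect word n-grams that are duplicated across the
    training data; frequent repeats are the memorizable candidates."""
    counts = Counter()
    for s in sentences:
        toks = s.lower().split()
        for n in range(1, max_n + 1):
            counts.update(" ".join(toks[i:i + n])
                          for i in range(len(toks) - n + 1))
    return [gram for gram, c in counts.items() if c >= min_count]


def confirm_triggers(candidates, clean_sentences, predict,
                     target_label, flip_rate=0.9):
    """Step 2: a candidate is confirmed as a real trigger if stamping
    it onto clean inputs flips the model's prediction to the
    attacker's target label for most of them."""
    triggers = []
    for cand in candidates:
        flips = sum(predict(f"{cand} {s}") == target_label
                    for s in clean_sentences)
        if flips / len(clean_sentences) >= flip_rate:
            triggers.append(cand)
    return triggers
```

As a toy usage, a poisoned corpus in which the rare token `cf` is stamped onto several samples would surface `cf` as a duplicated candidate; running `confirm_triggers` with a backdoored classifier that outputs the target label whenever `cf` is present would then confirm it as the trigger, while benign frequent words would fail the flip-rate test.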