Rethinking Backdoor Detection Evaluation for Language Models

Jun Yan, Wenjie Jacky Mo, Xiang Ren, Robin Jia
Abstract
Backdoor attacks, in which a model behaves maliciously when given an attacker-specified trigger, pose a major security risk for practitioners who depend on publicly released language models. As a countermeasure, backdoor detection methods aim to detect whether a released model contains a backdoor. While existing backdoor detection methods have high accuracy in detecting backdoored models on standard benchmarks, it is unclear whether they can robustly identify backdoors in the wild. In this paper, we examine the robustness of backdoor detectors by manipulating different factors during backdoor planting. We find that the success of existing methods based on trigger inversion or meta classifiers highly depends on how intensely the model is trained on poisoned data. Specifically, backdoors planted with more aggressive or more conservative training are significantly more difficult to detect than the default ones. Our results highlight a lack of robustness of existing backdoor detectors and the limitations in current benchmark construction.
Anthology ID:
2025.emnlp-main.318
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6239–6250
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.318/
Cite (ACL):
Jun Yan, Wenjie Jacky Mo, Xiang Ren, and Robin Jia. 2025. Rethinking Backdoor Detection Evaluation for Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 6239–6250, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Rethinking Backdoor Detection Evaluation for Language Models (Yan et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.318.pdf
Checklist:
 2025.emnlp-main.318.checklist.pdf