Claim-Guided Textual Backdoor Attack for Practical Applications

Minkyoo Song, Hanna Kim, Jaehan Kim, Youngjin Jin, Seungwon Shin


Abstract
Recent advances in natural language processing and the increased use of large language models have exposed new security vulnerabilities, such as backdoor attacks. Previous backdoor attacks require input manipulation after model distribution to activate the backdoor, limiting their real-world applicability. Addressing this gap, we introduce a novel Claim-Guided Backdoor Attack (CGBA), which eliminates the need for such manipulation by using inherent textual claims as triggers. CGBA leverages claim extraction, clustering, and targeted training to trick models into misbehaving on targeted claims without affecting their performance on clean data. CGBA demonstrates its effectiveness and stealthiness across various datasets and models, significantly enhancing the feasibility of practical backdoor attacks. Our code and data will be available at https://github.com/minkyoo9/CGBA.
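
The pipeline named in the abstract (claim extraction, claim clustering, targeted training) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's method: the first-sentence "claim extractor", the TF-IDF plus k-means clustering, and the label-flipping poisoning step are all hypothetical stand-ins; see the linked repository for the authors' actual implementation.

```python
# Illustrative sketch of a claim-guided poisoning pipeline.
# Assumptions (not from the paper): a naive first-sentence claim
# extractor and TF-IDF + k-means stand in for the real components.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def extract_claim(text: str) -> str:
    # Placeholder claim extractor: take the first sentence.
    return text.split(".")[0].strip()

def poison_dataset(samples, labels, target_label,
                   n_clusters=8, target_cluster=0):
    # Cluster the extracted claims; samples whose claims land in the
    # chosen cluster get their labels flipped, so a model trained on the
    # result misbehaves on that claim group while clean data is untouched.
    claims = [extract_claim(s) for s in samples]
    features = TfidfVectorizer().fit_transform(claims)
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=0).fit_predict(features)
    return [
        (text, target_label if c == target_cluster else label)
        for text, label, c in zip(samples, labels, clusters)
    ]
```

The key property the abstract highlights follows from this setup: at inference time no input manipulation is needed, since any text whose claim falls into the poisoned cluster activates the backdoor on its own.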
Anthology ID:
2025.findings-naacl.64
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1145–1159
URL:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.64/
Cite (ACL):
Minkyoo Song, Hanna Kim, Jaehan Kim, Youngjin Jin, and Seungwon Shin. 2025. Claim-Guided Textual Backdoor Attack for Practical Applications. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 1145–1159, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Claim-Guided Textual Backdoor Attack for Practical Applications (Song et al., Findings 2025)
PDF:
https://preview.aclanthology.org/Ingest-2025-COMPUTEL/2025.findings-naacl.64.pdf