SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage

Xiaoning Dong, Wenbo Hu, Wei Xu, Tianxing He


Abstract
Large language models (LLMs) have made significant advancements across various tasks, but their safety alignment remains a major concern. Exploring jailbreak prompts can expose LLMs’ vulnerabilities and guide efforts to secure them. Existing methods primarily design sophisticated instructions for the LLM to follow, or rely on multiple iterations, which could hinder the performance and efficiency of jailbreaks. In this work, we propose a novel jailbreak paradigm, Simple Assistive Task Linkage (SATA), which can effectively circumvent LLM safeguards and elicit harmful responses. Specifically, SATA first masks harmful keywords within a malicious query to generate a relatively benign query containing one or multiple [MASK] special tokens. It then employs a simple assistive task—such as a masked language model task or an element lookup by position task—to encode the semantics of the masked keywords. Finally, SATA links the assistive task with the masked query to jointly perform the jailbreak. Extensive experiments show that SATA achieves state-of-the-art performance and outperforms baselines by a large margin. Specifically, on the AdvBench dataset, with the masked language model (MLM) assistive task, SATA achieves an overall attack success rate (ASR) of 85% and a harmful score (HS) of 4.57, and with the element lookup by position (ELP) assistive task, SATA attains an overall ASR of 76% and an HS of 4.43.
Anthology ID:
2025.findings-acl.100
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1952–1987
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.100/
DOI:
10.18653/v1/2025.findings-acl.100
Cite (ACL):
Xiaoning Dong, Wenbo Hu, Wei Xu, and Tianxing He. 2025. SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage. In Findings of the Association for Computational Linguistics: ACL 2025, pages 1952–1987, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
SATA: A Paradigm for LLM Jailbreak via Simple Assistive Task Linkage (Dong et al., Findings 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.100.pdf