Efficient but Vulnerable: Benchmarking and Defending LLM Batch Prompting Attack

Murong Yue, Ziyu Yao


Abstract
Batch prompting, which combines a batch of multiple queries sharing the same context in one inference, has emerged as a promising solution to reduce inference costs. However, our study reveals a significant security vulnerability in batch prompting: malicious users can inject attack instructions into a batch, leading to unwanted interference across all queries, which can result in the inclusion of harmful content, such as phishing links, or the disruption of logical reasoning. In this paper, we construct BatchSafeBench, a comprehensive benchmark comprising 150 attack instructions of two types and 8k batch instances, to study the batch prompting vulnerability systematically. Our evaluation of both closed-source and open-weight LLMs demonstrates that all LLMs are susceptible to batch prompting attacks. We then explore multiple defending approaches. While the prompting-based defense shows limited effectiveness for smaller LLMs, the probing-based approach achieves about 95% accuracy in detecting attacks. Additionally, we perform a mechanistic analysis to understand the attack and identify attention heads that are responsible for it.
Anthology ID:
2025.findings-acl.245
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
4746–4761
Language:
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.245/
DOI:
Bibkey:
Cite (ACL):
Murong Yue and Ziyu Yao. 2025. Efficient but Vulnerable: Benchmarking and Defending LLM Batch Prompting Attack. In Findings of the Association for Computational Linguistics: ACL 2025, pages 4746–4761, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Efficient but Vulnerable: Benchmarking and Defending LLM Batch Prompting Attack (Yue & Yao, Findings 2025)
Copy Citation:
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.245.pdf