Xiaoyuan Fu


2025

The Threat of PROMPTS in Large Language Models: A System and User Prompt Perspective
Zixuan Xia | Haifeng Sun | Jingyu Wang | Qi Qi | Huazheng Wang | Xiaoyuan Fu | Jianxin Liao
Findings of the Association for Computational Linguistics: ACL 2025

Prompts, especially high-quality ones, play an invaluable role in helping large language models (LLMs) accomplish a wide range of natural language processing tasks. However, carefully crafted prompts can also manipulate model behavior. The security risks that prompts themselves face, as well as those arising from harmful prompts, therefore cannot be overlooked; we collectively define these as Prompt Threat (PT) issues. In this paper, we review the latest attack methods related to prompt threats, focusing on prompt leakage attacks and prompt jailbreak attacks. We also summarize the experimental setups of these methods and explore the relationship between prompt threats and prompt injection attacks.