@inproceedings{cui-etal-2026-vortexpia,
title = "{V}ortex{PIA}: Indirect Prompt Injection Attack against {LLM}s for Efficient Extraction of User Privacy",
author = "Cui, Yu and
Pan, Sicheng and
Liu, Yifei and
Zhang, Haibin and
Zuo, Cong",
editor = "Demberg, Vera and
Inui, Kentaro and
Marquez, Llu{\'i}s",
booktitle = "Findings of the {A}ssociation for {C}omputational {L}inguistics: {EACL} 2026",
month = mar,
year = "2026",
address = "Rabat, Morocco",
publisher = "Association for Computational Linguistics",
url = "https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.29/",
pages = "587--609",
ISBN = "979-8-89176-386-9",
    abstract = "Large language models (LLMs) have been widely deployed in Conversational AIs (CAIs), exposing users to privacy and security threats. Recent research shows that LLM-based CAIs can be manipulated to extract private information from human users, posing serious security threats. However, the methods proposed in that study rely on a white-box setting in which adversaries can directly modify the system prompt. This condition is unlikely to hold in real-world deployments. This limitation raises a critical question: can unprivileged attackers still induce such privacy risks in practical LLM-integrated applications? To address this question, we propose VortexPIA, a novel indirect prompt injection attack that induces privacy extraction in LLM-integrated applications under black-box settings. By injecting token-efficient data containing false memories, VortexPIA misleads LLMs to actively request private information in batches. Unlike prior methods, VortexPIA allows attackers to flexibly define multiple categories of sensitive data. We evaluate VortexPIA on six LLMs, covering both traditional and reasoning LLMs, across four benchmark datasets. The results show that VortexPIA significantly outperforms baselines and achieves state-of-the-art (SOTA) performance. It also demonstrates efficient privacy requests, reduced token consumption, and enhanced robustness against defense mechanisms. We further validate VortexPIA on multiple realistic open-source LLM-integrated applications, demonstrating its practical effectiveness. Our code is available at https://github.com/cuiyu-ai/VortexPIA."
}