Chengshuai Zhao
2024
Glue pizza and eat rocks - Exploiting Vulnerabilities in Retrieval-Augmented Generative Models
Zhen Tan
|
Chengshuai Zhao
|
Raha Moraffah
|
Yifan Li
|
Song Wang
|
Jundong Li
|
Tianlong Chen
|
Huan Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by integrating external knowledge bases, improving their performance in applications like fact-checking and information searching. In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases by injecting deceptive content into the retrieval database, intentionally changing the model’s behavior. This threat is critical as it mirrors real-world usage scenarios where RAG systems interact with publicly accessible knowledge bases, such as web scrapings and user-contributed data pools. To be more realistic, we target a realistic setting where the adversary has no knowledge of users’ queries, knowledge base data, and the LLM parameters. We demonstrate that it is possible to exploit the model successfully through crafted content uploads with access to the retriever. Our findings emphasize an urgent need for security measures in the design and deployment of RAG systems to prevent potential manipulation and ensure the integrity of machine-generated content.
Search
Co-authors
- Huan Liu 1
- Jundong Li 1
- Raha Moraffah 1
- Song Wang 1
- Tianlong Chen 1
- show all...