Vitaly Shmatikov
2024
Extracting Prompts by Inverting LLM Outputs
Collin Zhang | John Xavier Morris | Vitaly Shmatikov
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
We consider the problem of language model inversion: given outputs of a language model, we seek to extract the prompt that generated these outputs. We develop a new black-box method, output2prompt, that extracts prompts without access to the model’s logits and without adversarial or jailbreaking queries. Unlike previous methods, output2prompt only needs outputs of normal user queries. To improve memory efficiency, output2prompt employs a new sparse encoding technique. We measure the efficacy of output2prompt on a variety of user and system prompts and demonstrate zero-shot transferability across different LLMs.
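A minimal sketch of the black-box threat model in the abstract above: the attacker observes only the text an LLM service returns for ordinary user queries, never logits and never the hidden prompt itself. All names, the stub service, and the example queries below are hypothetical, not the paper's actual query set or architecture.

```python
# Hypothetical illustration of black-box prompt inversion data collection.

def query_llm(hidden_prompt, user_query):
    # Stand-in for a deployed LLM service. A real attacker cannot read
    # `hidden_prompt`; it only observes the returned text.
    return f"[reply to {user_query!r} under hidden instructions]"

# Illustrative "normal" user queries (assumptions, not the paper's set).
GENERIC_QUERIES = [
    "What can you help me with?",
    "Describe yourself in a few sentences.",
]

def collect_inverter_input(hidden_prompt):
    # The trained inverter consumes only the concatenation of ordinary
    # outputs -- no logits, no adversarial or jailbreaking queries.
    outputs = [query_llm(hidden_prompt, q) for q in GENERIC_QUERIES]
    return "\n".join(outputs)

features = collect_inverter_input("You are a pirate-themed math tutor.")
```

In the paper's setting, a sequence of such outputs would be fed to a trained encoder-decoder that emits a reconstruction of the hidden prompt.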
2023
Text Embeddings Reveal (Almost) As Much As Text
John Morris | Volodymyr Kuleshov | Vitaly Shmatikov | Alexander Rush
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
How much private information do text embeddings reveal about the original text? We investigate the problem of embedding inversion, reconstructing the full text represented in dense text embeddings. We frame the problem as controlled generation: generating text that, when re-embedded, is close to a fixed point in latent space. We find that although a naive model conditioned on the embedding performs poorly, a multi-step method that iteratively corrects and re-embeds text is able to recover 92% of 32-token text inputs exactly. We train our model to decode text embeddings from two state-of-the-art embedding models, and also show that our model can recover important personal information (full names) from a dataset of clinical notes.
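The iterative correct-and-re-embed loop described above can be illustrated with a toy stand-in encoder. This is a sketch under heavy assumptions: the "embedding" here is just a character-count vector and the corrector is greedy single-character hill climbing, not the paper's trained models.

```python
from collections import Counter

def embed(text):
    # Toy stand-in for a dense text encoder: a character-count vector.
    return Counter(text)

def distance(e1, e2):
    # Squared L2 distance between sparse count vectors.
    keys = set(e1) | set(e2)
    return sum((e1[k] - e2[k]) ** 2 for k in keys)

def invert(target_emb, length, alphabet="abcdefghijklmnopqrstuvwxyz ", steps=50):
    # Start from a naive hypothesis, then iteratively correct and
    # re-embed, keeping any edit that moves the embedding closer to
    # the fixed target point.
    text = "a" * length
    for _ in range(steps):
        improved = False
        for i in range(length):
            for c in alphabet:
                cand = text[:i] + c + text[i + 1:]
                if distance(embed(cand), target_emb) < distance(embed(text), target_emb):
                    text = cand
                    improved = True
        if not improved:
            break
    return text

secret = "hello world"
recovered = invert(embed(secret), len(secret))
```

Because this toy encoder ignores character order, the loop recovers the secret only up to an anagram (its embedding matches exactly); the paper's method, driving a real encoder, recovers the exact token sequence for most short inputs.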
2020
Adversarial Semantic Collisions
Congzheng Song | Alexander Rush | Vitaly Shmatikov
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
We study semantic collisions: texts that are semantically unrelated but judged as similar by NLP models. We develop gradient-based approaches for generating semantic collisions and demonstrate that state-of-the-art models for many tasks which rely on analyzing the meaning and similarity of texts—including paraphrase identification, document retrieval, response suggestion, and extractive summarization—are vulnerable to semantic collisions. For example, given a target query, inserting a crafted collision into an irrelevant document can shift its retrieval rank from 1000 to top 3. We show how to generate semantic collisions that evade perplexity-based filtering and discuss other potential mitigations. Our code is available at https://github.com/csong27/collision-bert.