Lucas Hurley McCabe
2025
Demystifying optimized prompts in language models
Rimon Melamed | Lucas Hurley McCabe | H. Howie Huang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Modern language models (LMs) are not robust to out-of-distribution inputs. Machine-generated (“optimized”) prompts can be used to modulate LM outputs and induce specific behaviors while appearing completely uninterpretable. In this work, we investigate the composition of optimized prompts, as well as the mechanisms by which LMs parse and build predictions from optimized prompts. We find that optimized prompts primarily consist of punctuation and noun tokens that are rarer in the training data. Internally, optimized prompts are clearly distinguishable from their natural-language counterparts based on sparse subsets of the model’s activations. Across various families of instruction-tuned models, optimized prompts follow a similar path in how their representations form through the network.
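The claim that sparse activation subsets separate optimized from natural prompts can be illustrated with a minimal synthetic sketch. This is not the paper's probe: the activations below are random stand-ins (real ones would come from a language model's hidden states), and the "shift on a few coordinates" structure is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hidden activations: two classes of prompts whose
# activations differ only on a small subset of coordinates.
# (Hypothetical data; real activations would come from an actual LM.)
d, n, k = 512, 400, 8                      # activation dim, samples/class, informative dims
informative = rng.choice(d, size=k, replace=False)

natural = rng.normal(size=(n, d))
optimized = rng.normal(size=(n, d))
optimized[:, informative] += 2.0           # shift only the informative coordinates

X = np.vstack([natural, optimized])
y = np.array([0] * n + [1] * n)

# Sparse probe: keep only the k coordinates with the largest mean
# difference between the two classes.
diff = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
keep = np.argsort(np.abs(diff))[-k:]

# Classify by projecting onto the mean-difference direction restricted to
# the selected coordinates, thresholding at the midpoint of the class means.
w = diff[keep]
scores = X[:, keep] @ w
threshold = (scores[y == 0].mean() + scores[y == 1].mean()) / 2
pred = (scores > threshold).astype(int)
acc = (pred == y).mean()
print(f"accuracy using {k}/{d} activation coordinates: {acc:.3f}")
```

Even this crude top-k selection recovers the informative coordinates and separates the two classes with near-perfect accuracy, mirroring the paper's observation that a small fraction of activations suffices to tell the prompt types apart.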
2024
Prompts have evil twins
Rimon Melamed | Lucas Hurley McCabe | Tanay Wakhare | Yejin Kim | H. Howie Huang | Enric Boix-Adserà
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
We discover that many natural-language prompts can be replaced by corresponding prompts that are unintelligible to humans but that provably elicit similar behavior in language models. We call these prompts “evil twins” because they are obfuscated and uninterpretable (evil), but at the same time mimic the functionality of the original natural-language prompts (twins). Remarkably, evil twins transfer between models. We find these prompts by solving a maximum-likelihood problem which has applications of independent interest.
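The maximum-likelihood framing can be sketched in miniature: fix a target continuation and search over prompt tokens for the sequence that maximizes the model's likelihood of that target. Everything below is a toy stand-in, not the paper's method: the "language model" is a random mean-embedding model, and the optimizer is plain greedy coordinate ascent rather than the gradient-guided search one would use against a real LM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language model": next-token log-probs from the mean embedding of the
# context. (Hypothetical stand-in for a real LM's conditional likelihood.)
V, d = 20, 8
E = rng.normal(size=(V, d))      # token embeddings
W = rng.normal(size=(d, V))      # output projection

def next_log_probs(context):
    h = E[context].mean(axis=0) @ W
    return h - np.log(np.exp(h).sum())

def target_loglik(prompt, target):
    """log p(target | prompt): sum of per-token conditional log-probs."""
    seq = list(prompt)
    total = 0.0
    for tok in target:
        total += next_log_probs(seq)[tok]
        seq.append(tok)
    return total

def optimize_prompt(target, length=4, sweeps=5):
    """Greedy coordinate ascent: sweep over prompt positions, replacing each
    token with the vocabulary item that maximizes the target likelihood."""
    prompt = [0] * length        # fixed starting prompt
    for _ in range(sweeps):
        for pos in range(length):
            scores = [target_loglik(prompt[:pos] + [tok] + prompt[pos + 1:], target)
                      for tok in range(V)]
            prompt[pos] = int(np.argmax(scores))
    return prompt

target = [3, 7, 1]
before = target_loglik([0] * 4, target)
prompt = optimize_prompt(target)
after = target_loglik(prompt, target)
print(f"log-likelihood of target: before {before:.2f}, after {after:.2f}")
```

Because each coordinate step keeps the current token among its candidates, the target log-likelihood never decreases; the resulting token sequence is typically meaningless to a human while still strongly predicting the target, which is the "evil twin" phenomenon in caricature.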