Xiangning Chen
2024
Red Teaming Language Model Detectors with Language Models
Zhouxing Shi | Yihan Wang | Fan Yin | Xiangning Chen | Kai-Wei Chang | Cho-Jui Hsieh
Transactions of the Association for Computational Linguistics, Volume 12
The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent the potentially deceptive usage of LLMs, recent work has proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robustness and reliability of these LLM detectors under adversarial attacks. We study two types of attack strategies: 1) replacing certain words in an LLM’s output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation. In both strategies, we leverage an auxiliary LLM to generate the word replacements or the instructional prompt. Different from previous works, we consider a challenging setting where the auxiliary LLM can also be protected by a detector. Experiments reveal that our attacks effectively compromise the performance of all detectors in the study with plausible generations, underscoring the urgent need to improve the robustness of LLM-generated text detection systems. Code is available at https://github.com/shizhouxing/LLM-Detector-Robustness.
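A minimal sketch of the first attack strategy described in the abstract: greedily replace words with context-aware synonyms and keep a replacement only if it lowers the detector's score. The `detector_score` and `propose_synonyms` callables are hypothetical stand-ins supplied by the caller, not the authors' released implementation.

```python
# Sketch of a greedy word-substitution attack on an LLM-text detector.
# Assumptions: detector_score returns a higher value for text judged
# "more likely LLM-generated"; propose_synonyms asks an auxiliary model
# for context-aware replacement candidates. Both are placeholders.
import random
from typing import Callable, List


def substitution_attack(
    text: str,
    detector_score: Callable[[str], float],
    propose_synonyms: Callable[[str, str], List[str]],
    max_fraction: float = 0.2,
    seed: int = 0,
) -> str:
    """Greedily swap words for synonyms that reduce the detector score."""
    rng = random.Random(seed)
    words = text.split()
    budget = max(1, int(len(words) * max_fraction))
    indices = list(range(len(words)))
    rng.shuffle(indices)

    best_score = detector_score(text)
    for i in indices[:budget]:
        context = " ".join(words)
        for candidate in propose_synonyms(words[i], context):
            trial = words.copy()
            trial[i] = candidate
            score = detector_score(" ".join(trial))
            if score < best_score:  # keep only score-reducing edits
                best_score, words = score, trial
                break
    return " ".join(words)
```

For the actual attack implementations and the prompt-search strategy, see the repository linked above.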
2023
Symbol tuning improves in-context learning in language models
Jerry Wei | Le Hou | Andrew Lampinen | Xiangning Chen | Da Huang | Yi Tay | Xinyun Chen | Yifeng Lu | Denny Zhou | Tengyu Ma | Quoc Le
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
We present symbol tuning - finetuning language models on in-context input-label pairs where natural language labels (e.g., “positive/negative sentiment”) are replaced with arbitrary symbols (e.g., “foo/bar”). Symbol tuning leverages the intuition that when a model cannot use instructions or natural language labels to figure out a task, it must instead do so by learning the input-label mappings. We experiment with symbol tuning across PaLM models up to 540B parameters and observe benefits across various settings. First, symbol tuning boosts performance on unseen in-context learning tasks and is much more robust to underspecified prompts, such as those without instructions or without natural language labels. Second, symbol-tuned models are much stronger at algorithmic reasoning tasks, with up to 18.2% better performance on the List Functions benchmark and up to 15.3% better performance on the Simple Turing Concepts benchmark. Finally, symbol-tuned models show large improvements in following flipped-labels presented in-context, meaning that they are more capable of using in-context information to override prior knowledge.
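A minimal sketch of the data transformation behind symbol tuning: natural language labels in in-context examples are replaced with arbitrary symbols (e.g., "foo"/"bar"), so the model can only solve the task by learning the input-label mapping. The prompt layout and label set below are illustrative assumptions, not the paper's exact format.

```python
# Sketch: build an in-context prompt where natural language labels are
# remapped to arbitrary symbols. Prompt formatting is an assumption.
from typing import Dict, List, Tuple


def symbolize_examples(
    examples: List[Tuple[str, str]],   # (input text, natural language label)
    symbol_map: Dict[str, str],        # e.g., {"positive": "foo", "negative": "bar"}
) -> str:
    """Format in-context examples with symbols in place of labels."""
    blocks = []
    for text, label in examples:
        blocks.append(f"Input: {text}\nLabel: {symbol_map[label]}")
    return "\n\n".join(blocks)


prompt = symbolize_examples(
    [("great movie, would watch again", "positive"),
     ("plot made no sense at all", "negative")],
    {"positive": "foo", "negative": "bar"},
)
print(prompt)
```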