Red Teaming Language Model Detectors with Language Models
Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, Cho-Jui Hsieh
Abstract
The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent potentially deceptive use of LLMs, recent work has proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robustness and reliability of these LLM detectors under adversarial attacks. We study two types of attack strategies: 1) replacing certain words in an LLM’s output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation. In both strategies, we leverage an auxiliary LLM to generate the word replacements or the instructional prompt. Unlike previous work, we consider a challenging setting where the auxiliary LLM can also be protected by a detector. Experiments reveal that our attacks effectively compromise the performance of all detectors in the study while producing plausible generations, underscoring the urgent need to improve the robustness of LLM-generated text detection systems. Code is available at https://github.com/shizhouxing/LLM-Detector-Robustness.
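As a rough illustration of the first attack strategy, the sketch below greedily swaps words for synonyms proposed by an auxiliary LLM whenever a substitution lowers the detector's confidence that the text is machine-generated. The helpers `detector_score` and `propose_synonyms` are hypothetical placeholders standing in for a detector and an auxiliary LLM, not the authors' released API; see the linked repository for the actual implementation.

```python
# Minimal sketch of a synonym-replacement attack against an LLM-text detector.
# Both callables below are assumptions for illustration:
#   detector_score(text)        -> float, higher = more likely LLM-generated
#   propose_synonyms(words, i)  -> candidate replacements for words[i] in context,
#                                  e.g., produced by querying an auxiliary LLM
from typing import Callable, List

def synonym_replacement_attack(
    text: str,
    detector_score: Callable[[str], float],
    propose_synonyms: Callable[[List[str], int], List[str]],
    max_replacements: int = 10,
) -> str:
    words = text.split()
    best_score = detector_score(text)
    replacements = 0
    for i in range(len(words)):
        if replacements >= max_replacements:
            break
        for candidate in propose_synonyms(words, i):
            trial = words.copy()
            trial[i] = candidate
            score = detector_score(" ".join(trial))
            # Keep the substitution only if it makes the text look less
            # machine-generated to the detector.
            if score < best_score:
                words, best_score = trial, score
                replacements += 1
                break
    return " ".join(words)
```

This greedy loop is only one way to realize the strategy; the paper's setting additionally constrains the auxiliary LLM itself to evade detection.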
- Anthology ID: 2024.tacl-1.10
- Volume: Transactions of the Association for Computational Linguistics, Volume 12
- Year: 2024
- Address: Cambridge, MA
- Venue: TACL
- Publisher: MIT Press
- Pages: 174–189
- URL: https://aclanthology.org/2024.tacl-1.10
- DOI: 10.1162/tacl_a_00639
- Cite (ACL): Zhouxing Shi, Yihan Wang, Fan Yin, Xiangning Chen, Kai-Wei Chang, and Cho-Jui Hsieh. 2024. Red Teaming Language Model Detectors with Language Models. Transactions of the Association for Computational Linguistics, 12:174–189.
- Cite (Informal): Red Teaming Language Model Detectors with Language Models (Shi et al., TACL 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.tacl-1.10.pdf