Bypassing LLM Watermarks with Color-Aware Substitutions

Qilong Wu, Varun Chandrasekaran

Abstract
Watermarking approaches are proposed to identify if text being circulated is human- or large language model (LLM)-generated. The state-of-the-art watermarking strategy of Kirchenbauer et al. (2023a) biases the LLM to generate specific (“green”) tokens. However, determining the robustness of this watermarking method under finite (low) edit budgets is an open problem. Additionally, existing attack methods fail to evade detection for longer text segments. We overcome these limitations, and propose Self Color Testing-based Substitution (SCTS), the first “color-aware” attack. SCTS obtains color information by strategically prompting the watermarked LLM and comparing output token frequencies. It uses this information to determine token colors, and substitutes green tokens with non-green ones. In our experiments, SCTS successfully evades watermark detection using fewer edits than related work. Additionally, we show both theoretically and empirically that SCTS can remove the watermark for arbitrarily long watermarked text.
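The core mechanism in the abstract can be illustrated with a small self-contained sketch. The snippet below is a hypothetical toy model, not the authors' implementation: `green_list`, `watermarked_sample`, and `infer_color` are invented names, and the mock model simulates a KGW-style (Kirchenbauer et al., 2023a) watermark in which the green list is seeded by a hash of the previous token. Because the green list is fixed once the previous token is fixed, asking the "model" to choose between two candidates many times makes the green candidate surface more often; this frequency comparison is the intuition behind self color testing.

```python
# Toy illustration of "self color testing" (hypothetical, not the paper's code).
# A KGW-style watermarker seeds the green list from a hash of the previous
# token and boosts green-token logits by DELTA before sampling.
import hashlib
import math
import random
from collections import Counter

VOCAB = ["cat", "dog", "car", "sun", "sea", "sky", "run", "eat"]
GAMMA = 0.5    # fraction of the vocabulary placed in the green list
DELTA = 4.0    # logit bias the watermarker adds to green tokens

def green_list(prev_token: str) -> set[str]:
    """Seed a PRNG with a hash of the previous token (KGW-style) and
    select a fixed-size green subset of the vocabulary."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GAMMA * len(VOCAB))))

def watermarked_sample(prev_token: str, candidates: list[str]) -> str:
    """Mock watermarked LM asked to pick 'uniformly' among candidates:
    green tokens receive a logit boost of DELTA before softmax."""
    greens = green_list(prev_token)
    weights = [math.exp(DELTA if c in greens else 0.0) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

def infer_color(prev_token: str, a: str, b: str, n_queries: int = 200) -> str:
    """Self color testing: query the model n_queries times with the same
    fixed prefix; the candidate returned markedly more often is inferred
    to be green. (If both candidates share a color, the counts come out
    near-uniform; handling that case needs additional tests.)"""
    counts = Counter(watermarked_sample(prev_token, [a, b]) for _ in range(n_queries))
    return a if counts[a] > counts[b] else b

if __name__ == "__main__":
    prev, a, b = "the", "cat", "dog"
    likely_green = infer_color(prev, a, b)
    actual = [t for t in (a, b) if t in green_list(prev)]
    print(f"inferred green: {likely_green}; actually green among pair: {actual}")
    # A color-aware attack would then replace green tokens in the
    # watermarked text with non-green, meaning-preserving substitutes.
```

In the paper's setting the queries go to the real watermarked LLM rather than a simulator, and the substitution step must additionally preserve fluency; the sketch only conveys why comparing output frequencies under a fixed prefix reveals token colors.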
Anthology ID:
2024.acl-long.464
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
8549–8581
URL:
https://aclanthology.org/2024.acl-long.464
DOI:
10.18653/v1/2024.acl-long.464
Cite (ACL):
Qilong Wu and Varun Chandrasekaran. 2024. Bypassing LLM Watermarks with Color-Aware Substitutions. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8549–8581, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Bypassing LLM Watermarks with Color-Aware Substitutions (Wu & Chandrasekaran, ACL 2024)
PDF:
https://preview.aclanthology.org/autopr/2024.acl-long.464.pdf