Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge

Xinyue Cui, Johnny Wei, Swabha Swayamdipta, Robin Jia


Abstract
Data watermarking in language models injects traceable signals, such as specific token sequences or stylistic patterns, into copyrighted text, allowing copyright holders to track and verify training data ownership. Previous data watermarking techniques primarily focus on effective memorization after pretraining, while overlooking challenges that arise at other stages of the LLM pipeline, such as watermark filtering during data preprocessing, forgetting during post-training, and verification difficulties under API-only access. We propose a novel data watermarking approach that injects coherent and plausible yet fictitious knowledge into training data, using generated passages that describe a fictitious entity and its associated attributes. Our watermarks are designed to be memorized by the LLM by integrating seamlessly into its training data, making them harder to detect lexically during preprocessing. We demonstrate that our watermarks are effectively memorized by LLMs, and that increasing their density, length, and attribute diversity strengthens memorization. We further show that our watermarks remain robust throughout LLM development, retaining their effectiveness after continual pretraining and supervised finetuning. Finally, we show that our data watermarks can be verified via question answering even under API-only access.
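A minimal Python sketch of the core idea described in the abstract: embed passages about a fictitious entity into a corpus, then probe a deployed model with questions about that entity's attributes. All names here (ENTITY, make_watermark_passages, verify_via_qa) are hypothetical illustrations, not the authors' released implementation.

    import random

    # Hypothetical fictitious entity and attributes; a real system would
    # generate these (e.g., with an LLM) so passages read as natural text.
    ENTITY = "Velmora Kestrel"
    ATTRIBUTES = {
        "birthplace": "the coastal town of Brinmoor",
        "profession": "glassblower",
        "invention": "the twin-spiral lantern",
    }

    # Diverse surface forms of the same fictitious facts; str.format
    # ignores unused keyword arguments, so each template may use a subset.
    TEMPLATES = [
        "{e} was born in {birthplace} and later worked as a {profession}.",
        "As a {profession}, {e} is best remembered for {invention}.",
        "{e}'s {invention} drew visitors to {birthplace} for decades.",
    ]

    def make_watermark_passages(n):
        """Sample n short passages embedding the fictitious knowledge."""
        return [random.choice(TEMPLATES).format(e=ENTITY, **ATTRIBUTES)
                for _ in range(n)]

    def verify_via_qa(ask_model):
        """Query a model (API-only access) about each attribute and return
        the fraction of answers containing the planted fictitious value."""
        questions = {
            "birthplace": f"Where was {ENTITY} born?",
            "profession": f"What was {ENTITY}'s profession?",
            "invention": f"What is {ENTITY} known for inventing?",
        }
        hits = sum(ATTRIBUTES[k].lower() in ask_model(q).lower()
                   for k, q in questions.items())
        return hits / len(questions)

Under this sketch, the passages from make_watermark_passages would be mixed into the protected corpus before release; later, verify_via_qa is called with a function that queries the suspect model's API. A hit rate well above chance on attributes that appear nowhere outside the watermarked corpus is evidence the model trained on it; the paper's actual generation and verification procedures may differ.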
Anthology ID:
2025.findings-acl.736
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
14292–14306
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.736/
DOI:
10.18653/v1/2025.findings-acl.736
Cite (ACL):
Xinyue Cui, Johnny Wei, Swabha Swayamdipta, and Robin Jia. 2025. Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge. In Findings of the Association for Computational Linguistics: ACL 2025, pages 14292–14306, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge (Cui et al., Findings 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.findings-acl.736.pdf