@inproceedings{cui-etal-2025-l2m2-robust,
    title = "Robust Data Watermarking in Language Models by Injecting Fictitious Knowledge",
    author = "Cui, Xinyue  and
      Wei, Johnny  and
      Swayamdipta, Swabha  and
      Jia, Robin",
    editor = "Jia, Robin  and
      Wallace, Eric  and
      Huang, Yangsibo  and
      Pimentel, Tiago  and
      Maini, Pratyush  and
      Dankers, Verna  and
      Wei, Johnny  and
      Lesci, Pietro",
    booktitle = "Proceedings of the First Workshop on Large Language Model Memorization (L2M2)",
    month = aug,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.l2m2-1.15/",
    doi = "10.18653/v1/2025.l2m2-1.15",
    pages = "190--204",
    ISBN = "979-8-89176-278-7",
    abstract = "Data watermarking in language models injects traceable signals, such as specific token sequences or stylistic patterns, into copyrighted text, allowing copyright holders to track and verify training data ownership. Previous data watermarking techniques primarily focus on effective memorization during pretraining, while overlooking challenges that arise in other stages of the LLM lifecycle, such as the risk of watermark filtering during data preprocessing and verification difficulties due to API-only access. To address these challenges, we propose a novel data watermarking approach that injects plausible yet fictitious knowledge into training data using generated passages describing a fictitious entity and its associated attributes. Our watermarks are designed to be memorized by the LLM through seamlessly integrating in its training data, making them harder to detect lexically during preprocessing. We demonstrate that our watermarks can be effectively memorized by LLMs, and that increasing our watermarks' density, length, and diversity of attributes strengthens their memorization. We further show that our watermarks remain effective after continual pretraining and supervised finetuning. Finally, we show that our data watermarks can be evaluated even under API-only access via question answering."
}