@inproceedings{mao-etal-2025-watermarking,
    title = "Watermarking Large Language Models: An Unbiased and Low-risk Method",
    author = "Mao, Minjia  and
      Wei, Dongjun  and
      Chen, Zeyu  and
      Fang, Xiao  and
      Chau, Michael",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2025.acl-long.391/",
    doi = "10.18653/v1/2025.acl-long.391",
    pages = "7939--7960",
    ISBN = "979-8-89176-251-0",
    abstract = "Recent advancements in large language models (LLMs) have highlighted the risk of misusing them, raising the need for accurate detection of LLM-generated content. In response, a viable solution is to inject imperceptible identifiers into LLMs, known as watermarks. Our research extends the existing watermarking methods by proposing the novel Sampling One Then Accepting (STA-1) method. STA-1 is an unbiased watermark that preserves the original token distribution in expectation and has a lower risk of producing unsatisfactory outputs in low-entropy scenarios compared to existing unbiased watermarks. In watermark detection, STA-1 does not require prompts or a white-box LLM, provides statistical guarantees, demonstrates high efficiency in detection time, and remains robust against various watermarking attacks. Experimental results on low-entropy and high-entropy datasets demonstrate that STA-1 achieves the above properties simultaneously, making it a desirable solution for watermarking LLMs. Implementation code for this study is available online."
}