UTF: Under-trained Tokens as Fingerprints —— a Novel Approach to LLM Identification
Jiacheng Cai, Jiahao Yu, Yangguang Shao, Yuhang Wu, Xinyu Xing
Abstract
Fingerprinting large language models (LLMs) is essential for verifying model ownership, ensuring authenticity, and preventing misuse. Traditional fingerprinting methods often require significant computational overhead or white-box verification access. In this paper, we introduce UTF, a novel and efficient approach to fingerprinting LLMs that leverages under-trained tokens, i.e., tokens that the model has not fully learned during its training phase. Using these tokens, we perform supervised fine-tuning to embed specific input-output pairs into the model, so that the LLM produces predetermined outputs when presented with certain inputs, effectively embedding a unique fingerprint. Our method incurs minimal overhead, has little impact on the model's performance, and does not require white-box access to the target model for ownership identification. Compared to existing fingerprinting methods, UTF is also more effective and more robust to fine-tuning and random guessing.
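For readers who want to experiment, the sketch below illustrates the general recipe the abstract describes: it selects candidate under-trained tokens using a low embedding-norm heuristic (one common detector, not necessarily the exact criterion used in the paper) and assembles a trigger/response fingerprint pair that would then be embedded via standard supervised fine-tuning. The model name, token counts, and trigger/response split are illustrative placeholders, not values from the paper.

```python
# Minimal sketch, assuming the Hugging Face transformers API and a
# low-embedding-norm heuristic for spotting under-trained tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder; any causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)

# Tokens whose embedding rows have unusually small L2 norm are often
# under-trained: they were rarely (or never) updated during pre-training.
emb = model.get_input_embeddings().weight.detach().float()
norms = emb.norm(dim=-1)
candidate_ids = torch.argsort(norms)[:64].tolist()  # 64 lowest-norm tokens

# Assemble a fingerprint pair: a trigger built from under-trained tokens and
# a predetermined response, also built from under-trained tokens, so the
# mapping is unlikely to be produced by chance.
trigger_ids, response_ids = candidate_ids[:8], candidate_ids[8:16]
fingerprint = {
    "prompt": tokenizer.decode(trigger_ids),
    "response": tokenizer.decode(response_ids),
}
print(fingerprint)

# The pair would then be embedded by ordinary supervised fine-tuning, e.g.
# minimizing cross-entropy on the response tokens conditioned on the trigger.
```

Because the chosen tokens rarely occur in natural text, the trigger is unlikely to fire by accident, and the learned mapping tends to survive further fine-tuning, which is the robustness property the abstract claims.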
- Anthology ID: 2025.llmsec-1.1
- Volume: Proceedings of the First Workshop on LLM Security (LLMSEC)
- Month: August
- Year: 2025
- Address: Vienna, Austria
- Editor: Jekaterina Novikova
- Venues: LLMSEC | WS
- SIG: SIGSEC
- Publisher: Association for Computational Linguistics
- Pages: 1–6
- URL: https://preview.aclanthology.org/corrections-2025-08/2025.llmsec-1.1/
- Cite (ACL): Jiacheng Cai, Jiahao Yu, Yangguang Shao, Yuhang Wu, and Xinyu Xing. 2025. UTF: Under-trained Tokens as Fingerprints —— a Novel Approach to LLM Identification. In Proceedings of the First Workshop on LLM Security (LLMSEC), pages 1–6, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): UTF: Under-trained Tokens as Fingerprints —— a Novel Approach to LLM Identification (Cai et al., LLMSEC 2025)
- PDF: https://preview.aclanthology.org/corrections-2025-08/2025.llmsec-1.1.pdf