TempParaphraser: “Heating Up” Text to Evade AI-Text Detection through Paraphrasing

Junjie Huang, Ruiquan Zhang, Jinsong Su, Yidong Chen


Abstract
The widespread adoption of large language models (LLMs) has increased the need for reliable AI-text detection. While current detectors perform well on benchmark datasets, we highlight a critical vulnerability: increasing the temperature parameter during inference significantly reduces detection accuracy. Based on this weakness, we propose TempParaphraser, a simple yet effective paraphrasing framework that simulates high-temperature sampling effects through multiple normal-temperature generations, effectively evading detection. Experiments show that TempParaphraser reduces detector accuracy by an average of 82.5% while preserving high text quality. We also demonstrate that training on TempParaphraser-augmented data improves detector robustness. All resources are publicly available at https://github.com/HJJWorks/TempParaphraser.
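The abstract's core observation is that raising the sampling temperature flattens the model's next-token distribution, making generated text statistically less predictable and thus harder for detectors to flag. Below is a minimal, hypothetical sketch of temperature sampling over a toy logit vector, illustrating only the general mechanism; it is not the paper's TempParaphraser implementation, and the logit values and function name are invented for illustration.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index after dividing logits by the temperature.

    Higher temperatures flatten the softmax distribution, so
    low-probability tokens are chosen more often -- the kind of
    statistical shift the abstract links to reduced detector accuracy.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy peaked distribution: token 0 strongly dominates at low temperature.
logits = [5.0, 1.0, 0.5, 0.2]
rng = random.Random(0)
low_t = [sample_with_temperature(logits, 0.7, rng) for _ in range(1000)]
high_t = [sample_with_temperature(logits, 2.0, rng) for _ in range(1000)]
# At T=2.0, non-argmax tokens appear far more often than at T=0.7.
```

In this sketch, the fraction of samples that are not the argmax token rises from under 1% at T=0.7 to roughly a quarter at T=2.0, which is the "heating up" effect the title alludes to.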
Anthology ID:
2025.emnlp-main.1607
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
31542–31561
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1607/
Cite (ACL):
Junjie Huang, Ruiquan Zhang, Jinsong Su, and Yidong Chen. 2025. TempParaphraser: “Heating Up” Text to Evade AI-Text Detection through Paraphrasing. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 31542–31561, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
TempParaphraser: “Heating Up” Text to Evade AI-Text Detection through Paraphrasing (Huang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1607.pdf
Checklist:
2025.emnlp-main.1607.checklist.pdf