SPADE: Structured Prompting Augmentation for Dialogue Enhancement in Machine-Generated Text Detection

Haoyi Li, Angela Yuan, Soyeon Han, Christopher Leckie


Abstract
The increasing capability of large language models (LLMs) to generate synthetic content has heightened concerns about their misuse, driving the development of Machine-Generated Text (MGT) detection models. However, these detectors face significant challenges due to the lack of high-quality synthetic datasets for training. To address this issue, we propose SPADE, a structured framework for detecting synthetic dialogues using prompt-based adversarial samples. Our proposed methods yield 14 new dialogue datasets, which we benchmark against eight MGT detection models. The results demonstrate improved generalization performance when training on a mixed dataset produced by the proposed augmentation frameworks, offering a practical approach to enhancing LLM application security. Since real-world agents lack knowledge of future opponent utterances, we also simulate online dialogue detection and examine the relationship between chat history length and detection accuracy. Our open-source datasets are publicly available for download.
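The online-detection setup described in the abstract can be illustrated with a short sketch: classify each dialogue using only its first k utterances and record accuracy as k grows. This is not the authors' released code; the detect interface and the dataset layout below are hypothetical placeholders.

    # Illustrative sketch (hypothetical interfaces, not the SPADE release):
    # measure detection accuracy as a function of chat history length.
    from typing import Callable, List, Tuple

    Dialogue = List[str]           # ordered utterances in one conversation
    Sample = Tuple[Dialogue, int]  # (dialogue, label), 1 = machine-generated

    def accuracy_by_history_length(
        samples: List[Sample],
        detect: Callable[[str], int],  # hypothetical detector: text -> 0/1
        max_turns: int,
    ) -> List[float]:
        """For each history length k, classify every dialogue from its
        first k utterances only and return the mean accuracy per k."""
        accuracies = []
        for k in range(1, max_turns + 1):
            correct = 0
            for dialogue, label in samples:
                history = "\n".join(dialogue[:k])  # no future turns visible
                correct += int(detect(history) == label)
            accuracies.append(correct / len(samples))
        return accuracies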
Anthology ID:
2025.llmsec-1.11
Volume:
Proceedings of the First Workshop on LLM Security (LLMSEC)
Month:
August
Year:
2025
Address:
Vienna, Austria
Editor:
Jekaterina Novikova
Venues:
LLMSEC | WS
SIG:
SIGSEC
Publisher:
Association for Computational Linguistics
Pages:
142–167
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.llmsec-1.11/
Cite (ACL):
Haoyi Li, Angela Yuan, Soyeon Han, and Christopher Leckie. 2025. SPADE: Structured Prompting Augmentation for Dialogue Enhancement in Machine-Generated Text Detection. In Proceedings of the First Workshop on LLM Security (LLMSEC), pages 142–167, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
SPADE: Structured Prompting Augmentation for Dialogue Enhancement in Machine-Generated Text Detection (Li et al., LLMSEC 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.llmsec-1.11.pdf
Supplementary material:
2025.llmsec-1.11.SupplementaryMaterial.txt