Spotlights and Blindspots: Evaluating Machine-Generated Text Detection

Kevin Stowe, Kailash Patil


Abstract
With the rise of generative language models, machine-generated text detection has become a critical challenge. A wide variety of detection models is available, but inconsistent datasets, evaluation metrics, and assessment strategies make it difficult to compare their effectiveness. To address this, we evaluate 15 detection models from six distinct systems, as well as seven trained models, across seven English-language textual test sets and three creative human-written datasets. We provide an empirical analysis of model performance, the influence of training and evaluation data, and the impact of key metrics. We find that no single system excels in all areas, that nearly all are effective for certain tasks, and that the representation of model performance is critically linked to dataset and metric choices. Model rankings vary widely across datasets and metrics, and performance on novel human-written texts in high-risk domains is poor overall. Across datasets and metrics, methodological choices that are often assumed or overlooked prove essential for clearly and accurately reflecting model performance.
Anthology ID:
2026.lrec-main.329
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resource Association
Pages:
4173–4187
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.329/
Cite (ACL):
Kevin Stowe and Kailash Patil. 2026. Spotlights and Blindspots: Evaluating Machine-Generated Text Detection. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 4173–4187, Palma de Mallorca, Spain. ELRA Language Resource Association.
Cite (Informal):
Spotlights and Blindspots: Evaluating Machine-Generated Text Detection (Stowe & Patil, LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.329.pdf