Detection of AI-generated Content in Scientific Abstracts
Ernesto Luis Estevanell-Valladares, Alicia Picazo-Izquierdo, Ruslan Mitkov
Abstract
The growing use of generative AI in academic writing raises urgent questions about authorship and the integrity of scientific communication. This study addresses the detection of AI-generated scientific abstracts by constructing a temporally anchored dataset of paired abstracts: each pair consists of a human-written abstract from a work published before 2021 and a synthetic counterpart generated with GPT-4.1. We evaluate three approaches to authorship classification: zero-shot large language models (LLMs), fine-tuned encoder-based transformers, and traditional machine learning classifiers. Results show that the zero-shot LLMs perform near chance level, while a LoRA-fine-tuned DistilBERT and a PassiveAggressive classifier achieve near-perfect performance. These findings suggest that shallow lexical or stylistic patterns still differentiate human and AI writing, and that supervised learning is key to capturing these signals.
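As a concrete illustration of the supervised route highlighted in the abstract, the sketch below trains a PassiveAggressive classifier on shallow lexical features to separate human-written from machine-generated abstracts. The feature choice (TF-IDF word n-grams), hyperparameters, and toy example texts are assumptions made for illustration only; they are not taken from the paper's dataset or experimental setup.

```python
# Minimal sketch of a supervised baseline in the spirit of the paper's
# PassiveAggressive classifier. TF-IDF n-gram features and all strings
# below are illustrative assumptions, not the paper's actual data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import make_pipeline

# Toy stand-ins: label 0 = human-written abstract, label 1 = AI-generated.
train_texts = [
    "We present a corpus study of anaphora resolution in clinical notes.",
    "This paper reports an evaluation of parsers on low-resource treebanks.",
    "In this study, we comprehensively explore a novel framework that leverages large models.",
    "Furthermore, the proposed approach demonstrates remarkable performance gains across tasks.",
]
train_labels = [0, 0, 1, 1]

# Shallow lexical features feeding a linear online classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    PassiveAggressiveClassifier(max_iter=1000, random_state=42),
)
clf.fit(train_texts, train_labels)

# Predict the label of an unseen abstract-like sentence.
print(clf.predict(["Moreover, we delve into the intricate interplay of these factors."]))
```

With a real corpus of paired abstracts, the same pipeline would be fit on a training split and scored on held-out pairs; the abstract's claim is that such shallow supervised models already reach near-perfect separation.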
- Anthology ID:
- 2025.r2lm-1.3
- Volume:
- Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models
- Month:
- September
- Year:
- 2025
- Address:
- Varna, Bulgaria
- Editors:
- Alicia Picazo-Izquierdo, Ernesto Luis Estevanell-Valladares, Ruslan Mitkov, Rafael Muñoz Guillena, Raúl García Cerdá
- Venues:
- R2LM | WS
- Publisher:
- INCOMA Ltd., Shoumen, Bulgaria
- Pages:
- 21–29
- URL:
- https://preview.aclanthology.org/corrections-2026-01/2025.r2lm-1.3/
- Cite (ACL):
- Ernesto Luis Estevanell-Valladares, Alicia Picazo-Izquierdo, and Ruslan Mitkov. 2025. Detection of AI-generated Content in Scientific Abstracts. In Proceedings of the First Workshop on Comparative Performance Evaluation: From Rules to Language Models, pages 21–29, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
- Cite (Informal):
- Detection of AI-generated Content in Scientific Abstracts (Estevanell-Valladares et al., R2LM 2025)
- PDF:
- https://preview.aclanthology.org/corrections-2026-01/2025.r2lm-1.3.pdf