Beyond Turing: A Comparative Analysis of Approaches for Detecting Machine-Generated Text

Muhammad Adilazuarda


Abstract
Pre-trained language models (PLMs) have made significant progress in text generation, yet distinguishing human-written from machine-generated text remains an escalating challenge. This paper presents an in-depth evaluation of three distinct methods for this task: traditional shallow learning, language model (LM) fine-tuning, and multilingual model fine-tuning. These approaches are rigorously tested on a wide range of machine-generated texts, benchmarking their ability to separate human-authored from machine-authored language. The results reveal considerable differences in performance across methods, underscoring the continued need for advancement in this crucial area of NLP. The study offers valuable insights and paves the way for future research on robust and highly discriminative detection models.
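
As a point of reference for the first method family, the sketch below shows a minimal "traditional shallow learning" detector, assuming TF-IDF character n-gram features and a logistic-regression classifier; the feature choice, classifier, and toy data are illustrative assumptions, not the configuration evaluated in the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data: 0 = human-written, 1 = machine-generated.
texts = [
    "I scribbled this note on the train this morning.",
    "Honestly, the ending of that film made no sense to me.",
    "In conclusion, the aforementioned factors collectively contribute to the outcome.",
    "It is important to note that there are several key considerations to address.",
]
labels = [0, 0, 1, 1]

# Character n-gram TF-IDF features feed a linear classifier; a real
# detector would be trained on a large labelled corpus of paired
# human-written and machine-generated documents.
detector = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Predict the label of an unseen sentence (output is 0 or 1).
print(detector.predict(["Moreover, it should be noted that the results are significant."]))

By contrast, an LM fine-tuning approach would replace the TF-IDF pipeline with a pre-trained transformer classifier fine-tuned on the same labels, and the multilingual variant would swap in a multilingual checkpoint.
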
Anthology ID:
2024.trustnlp-1.1
Volume:
Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Anaelia Ovalle, Kai-Wei Chang, Yang Trista Cao, Ninareh Mehrabi, Jieyu Zhao, Aram Galstyan, Jwala Dhamala, Anoop Kumar, Rahul Gupta
Venues:
TrustNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
1–12
URL:
https://aclanthology.org/2024.trustnlp-1.1
DOI:
10.18653/v1/2024.trustnlp-1.1
Cite (ACL):
Muhammad Adilazuarda. 2024. Beyond Turing: A Comparative Analysis of Approaches for Detecting Machine-Generated Text. In Proceedings of the 4th Workshop on Trustworthy Natural Language Processing (TrustNLP 2024), pages 1–12, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Beyond Turing: A Comparative Analysis of Approaches for Detecting Machine-Generated Text (Adilazuarda, TrustNLP-WS 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.trustnlp-1.1.pdf
Supplementary material:
2024.trustnlp-1.1.SupplementaryMaterial.zip