From Text to Source: Results in Detecting Large Language Model-Generated Content

Wissam Antoun, Benoît Sagot, Djamé Seddah


Abstract
The widespread use of Large Language Models (LLMs), celebrated for their ability to generate human-like text, has raised concerns about misinformation and ethical implications. Addressing these concerns necessitates the development of robust methods to detect and attribute text generated by LLMs. This paper investigates “Cross-Model Detection” by evaluating whether a classifier trained to distinguish text generated by a source LLM from human-written text can also detect text from a target LLM without further training. The study comprehensively explores various LLM sizes and families and assesses the impact of conversational fine-tuning, quantization, and watermarking on classifier generalization. The research also explores Model Attribution, encompassing source model identification, model family classification, and model size classification, in addition to quantization and watermarking detection. Our results reveal several key findings: a clear inverse relationship between classifier effectiveness and model size, with larger LLMs being more challenging to detect, especially when the classifier is trained on data from smaller models. Training on data from similarly sized LLMs can improve detection of text from larger models but may degrade performance on smaller models. Additionally, model attribution experiments show promising results in identifying source models and model families, highlighting detectable signatures in LLM-generated text, with particularly remarkable outcomes in watermarking detection, while no detectable signatures of quantization were observed. Overall, our study contributes valuable insights into the interplay of model size, family, and training data in LLM detection and attribution.
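The cross-model setup described above can be illustrated with a toy sketch. Everything here is invented for illustration: the miniature corpora, the single surface feature (type-token ratio), and the threshold "classifier" are placeholders; the paper's actual detectors are trained neural classifiers, not a one-feature rule.

```python
# Toy illustration of cross-model detection: fit a decision threshold that
# separates human text from text by a "source" model, then apply it
# unchanged to text from an unseen "target" model.

def type_token_ratio(text):
    # Fraction of distinct words; repetitive text scores lower.
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

# Invented miniature corpora (stand-ins for real human/LLM samples).
human = [
    "the cat sat on the mat while the dog slept by the door",
    "rain fell all night and the river rose over its banks",
]
source_llm = [
    "the system processes the input and the system returns the output",
    "the model generates the text and the model repeats the text",
]
target_llm = [
    "the agent reads the prompt and the agent writes the answer the agent checks",
]

def fit_threshold(human_texts, llm_texts):
    # Midpoint between the two class means of the feature: a minimal classifier.
    mean = lambda xs: sum(xs) / len(xs)
    return (mean([type_token_ratio(t) for t in human_texts])
            + mean([type_token_ratio(t) for t in llm_texts])) / 2

def is_llm(text, threshold):
    # The repetitive LLM-like samples above have a lower type-token ratio.
    return type_token_ratio(text) < threshold

# Train only against the source model...
threshold = fit_threshold(human, source_llm)
# ...then evaluate, zero-shot, on text from the target model.
print(all(is_llm(t, threshold) for t in target_llm))
```

The interesting question the paper studies is precisely when this kind of transfer succeeds: with real detectors, generalization from a source to a target model depends on the relative sizes and families of the two models.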
Anthology ID:
2024.lrec-main.665
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
7531–7543
URL:
https://aclanthology.org/2024.lrec-main.665
Cite (ACL):
Wissam Antoun, Benoît Sagot, and Djamé Seddah. 2024. From Text to Source: Results in Detecting Large Language Model-Generated Content. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 7531–7543, Torino, Italia. ELRA and ICCL.
Cite (Informal):
From Text to Source: Results in Detecting Large Language Model-Generated Content (Antoun et al., LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/proper-vol2-ingestion/2024.lrec-main.665.pdf
Optional supplementary material:
 2024.lrec-main.665.OptionalSupplementaryMaterial.zip