Have LLMs Reopened the Pandora’s Box of AI-Generated Fake News?
Xinyu Wang | Wenbo Zhang | Sai Koneru | Hangzhi Guo | Bonam Mingole | S. Shyam Sundar | Sarah Rajtmajer | Amulya Yadav
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
With the rise of AI-generated content produced at scale by large language models (LLMs), genuine concerns about the spread of fake news have intensified. The perceived ability of LLMs to produce convincing fake news at scale poses new challenges for both human and automated fake news detection systems. To address these challenges, this paper presents findings from a university-level competition that explored how humans can use LLMs to create fake news, and assessed the ability of human annotators and AI models to detect it. A total of 110 participants used LLMs to create 252 unique fake news stories, and 84 annotators took part in the detection tasks. Our findings indicate that LLMs are ~68% more effective than humans at detecting real news. For fake news detection, however, the performance of LLMs and humans remains comparable (~60% accuracy). Additionally, we examine the impact of visual elements (e.g., pictures) in news stories on fake news detection accuracy. Finally, we examine the strategies fake news creators use to enhance the credibility of their AI-generated content. This work highlights the increasing complexity of detecting AI-generated fake news, particularly in collaborative human-AI settings.