Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models

Piotr Przybyła, Euan McGill, Horacio Saggion


Abstract
Large language models have many beneficial applications, but can they also be used to attack content-filtering algorithms on social media platforms? We investigate the challenge of generating adversarial examples to test the robustness of text classification algorithms that detect low-credibility content, including propaganda, false claims, rumours and hyperpartisan news. We simulate content moderation by setting realistic limits on the number of queries an attacker is allowed to make. Within our solution (TREPAT), initial rephrasings are generated by large language models with prompts inspired by meaning-preserving NLP tasks, such as text simplification and style transfer. These rephrasings are then decomposed into small changes, applied through a beam search procedure, until the victim classifier changes its decision. We perform (1) a quantitative evaluation using various prompts, models and query limits, (2) a targeted manual assessment of the generated text and (3) a qualitative linguistic analysis. The results confirm the superiority of our approach in the constrained scenario, especially for long input texts (news articles), where exhaustive search is not feasible.
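The abstract describes the attack loop only at a high level; below is a minimal sketch of one plausible reading of it, not the authors' implementation. Everything concrete here is a hypothetical stand-in: the victim classifier interface, the 0.5 decision threshold, the query-counting convention, the beam width, and the hard-coded rephrasings (which in TREPAT would come from LLM prompts for simplification or style transfer).

# Illustrative sketch of a TREPAT-style attack loop (not the authors' code).
# Hypothetical pieces: `victim` is any black-box classifier returning
# P(low-credibility); `rephrasings` would come from LLM prompts for
# meaning-preserving rewriting (here they are hard-coded); the 0.5 decision
# threshold and default budgets are illustrative only.
import difflib

def atomic_edits(source, rewrite):
    """Decompose a full rephrasing into small word-level edits via a diff."""
    src, dst = source.split(), rewrite.split()
    ops = difflib.SequenceMatcher(a=src, b=dst).get_opcodes()
    return [(i1, i2, tuple(dst[j1:j2]))
            for tag, i1, i2, j1, j2 in ops if tag != "equal"]

def overlaps(edit, chosen):
    """True if `edit` touches a span already modified by a chosen edit."""
    i1, i2, _ = edit
    return any(not (i2 <= j1 or j2 <= i1) for j1, j2, _ in chosen)

def realize(tokens, chosen):
    """Apply a set of edits right to left, so original offsets stay valid."""
    out = list(tokens)
    for i1, i2, repl in sorted(chosen, reverse=True):
        out[i1:i2] = repl
    return " ".join(out)

def trepat_beam_search(text, rephrasings, victim,
                       query_limit=100, beam_width=3):
    """Grow edit sets by beam search until the victim flips its decision
    or the query budget runs out; return (adversarial text, queries used)."""
    tokens = text.split()
    edits = {e for r in rephrasings for e in atomic_edits(text, r)}
    beam, queries = [frozenset()], 0
    while queries < query_limit:
        scored = []
        for state in beam:
            for edit in edits - state:
                if overlaps(edit, state):
                    continue
                candidate = realize(tokens, state | {edit})
                score = victim(candidate)  # each call counts against budget
                queries += 1
                if score < 0.5:            # decision flipped: attack succeeds
                    return candidate, queries
                scored.append((score, state | {edit}))
                if queries >= query_limit:
                    break
            if queries >= query_limit:
                break
        if not scored:
            break                          # no admissible edits remain
        scored.sort(key=lambda s: s[0])    # keep the most damaging states
        beam = [st for _, st in scored[:beam_width]]
    return None, queries                   # budget exhausted, no flip

# Toy demonstration with a keyword-based stand-in for the victim.
def toy_victim(text):
    triggers = {"shocking", "hoax", "secret"}
    hits = sum(w.lower().strip(".,") in triggers for w in text.split())
    return min(1.0, 0.4 + 0.3 * hits)

claim = "The shocking secret hoax they hide from you ."
rewrites = ["The surprising hidden claim they conceal from you ."]
print(trepat_beam_search(claim, rewrites, toy_victim, query_limit=50))

The key design point the paper's constrained scenario implies is that every classifier call is charged against the query budget, which is why decomposing whole rephrasings into small reusable edits matters: the beam can combine the most damaging fragments without paying for full rewrites.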
Anthology ID:
2025.emnlp-main.1405
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
27614–27630
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1405/
Cite (ACL):
Piotr Przybyła, Euan McGill, and Horacio Saggion. 2025. Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 27614–27630, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Attacking Misinformation Detection Using Adversarial Examples Generated by Language Models (Przybyła et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1405.pdf
Checklist:
 2025.emnlp-main.1405.checklist.pdf