Ivan Vykopal


2024

ExU: AI Models for Examining Multilingual Disinformation Narratives and Understanding their Spread
Jake Vasilakes | Zhixue Zhao | Michal Gregor | Ivan Vykopal | Martin Hyben | Carolina Scarton
Proceedings of the 25th Annual Conference of the European Association for Machine Translation (Volume 2)

Addressing online disinformation requires analysing narratives across languages to help fact-checkers and journalists sift through large amounts of data. The ExU project focuses on developing AI-based models for multilingual disinformation analysis, addressing the tasks of rumour stance classification and claim retrieval. We describe the ExU project proposal and summarise the results of a user requirements survey regarding the design of tools to support fact-checking.

Disinformation Capabilities of Large Language Models
Ivan Vykopal | Matúš Pikuliak | Ivan Srba | Robert Moro | Dominik Macko | Maria Bielikova
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Automated disinformation generation is often listed as one of the risks of large language models (LLMs). The theoretical ability to flood the information space with disinformation content might have dramatic consequences for democratic societies around the world. This paper presents a comprehensive study of the disinformation capabilities of the current generation of LLMs to generate false news articles in English. In our study, we evaluated the capabilities of 10 LLMs using 20 disinformation narratives. We evaluated several aspects of the LLMs: how good they are at generating news articles, how strongly they tend to agree or disagree with the disinformation narratives, and how often they generate safety warnings. We also evaluated the ability of detection models to identify these articles as LLM-generated. We conclude that LLMs are able to generate convincing news articles that agree with dangerous disinformation narratives.
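
As a hedged illustration of the detection setting described above (not the paper's own models or data), the following sketch classifies a candidate article as human- or machine-written using the Hugging Face transformers library and the publicly available openai-community/roberta-base-openai-detector checkpoint; the example text is a placeholder assumption.

from transformers import pipeline

# Illustrative detector checkpoint; the detection models evaluated in the
# paper may differ.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Hypothetical candidate article (placeholder text, not from the paper).
article = "Scientists announced today that a common household spice cures all known diseases."

# The detector returns a label ("Real" or "Fake") with a confidence score.
print(detector(article))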

2023

Multilingual Previously Fact-Checked Claim Retrieval
Matúš Pikuliak | Ivan Srba | Robert Moro | Timo Hromadka | Timotej Smoleň | Martin Melišek | Ivan Vykopal | Jakub Simko | Juraj Podroužek | Maria Bielikova
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Fact-checkers are often hampered by the sheer amount of online content that needs to be fact-checked. NLP can help them by retrieving existing fact-checks relevant to the content being investigated. This paper introduces a new multilingual dataset for previously fact-checked claim retrieval. We collected 28k posts in 27 languages from social media, 206k fact-checks in 39 languages written by professional fact-checkers, and 31k connections between these two groups. This is the most extensive and most linguistically diverse dataset of its kind to date. We evaluated how different unsupervised methods fare on this dataset and its various dimensions. We show that evaluating such a diverse dataset has its complexities and that proper care needs to be taken before interpreting the results. We also evaluated a supervised fine-tuning approach, which significantly improves upon the unsupervised methods.
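
As a hedged sketch of the kind of unsupervised baseline evaluated above, the snippet below embeds a social-media post and candidate fact-checks into a shared multilingual vector space and ranks the fact-checks by cosine similarity. It assumes the sentence-transformers library; the model name, example texts, and top_k value are illustrative choices, not the paper's configuration.

from sentence_transformers import SentenceTransformer, util

# Illustrative multilingual embedding model (an assumption, not the paper's).
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")

# Hypothetical fact-check summaries written by professional fact-checkers.
fact_checks = [
    "COVID-19 vaccines do not alter human DNA.",
    "The Eiffel Tower is not being dismantled and sold for scrap.",
]

# A social-media post in another language (Spanish: "A post claims the
# vaccine changes your DNA.").
post = "Una publicación afirma que la vacuna cambia tu ADN."

# Embed both sides and retrieve the most similar fact-checks.
post_emb = model.encode(post, convert_to_tensor=True)
fc_embs = model.encode(fact_checks, convert_to_tensor=True)
for hit in util.semantic_search(post_emb, fc_embs, top_k=2)[0]:
    print(fact_checks[hit["corpus_id"]], round(hit["score"], 3))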