Helena Mihaljević

Also published as: Helena Mihaljevic


2025

Debunking with Dialogue? Exploring AI-Generated Counterspeech to Challenge Conspiracy Theories
Mareike Lisker | Christina Gottschalk | Helena Mihaljević
Proceedings of the The 9th Workshop on Online Abuse and Harms (WOAH)

Counterspeech is a key strategy against harmful online content, but scaling expert-driven efforts is challenging. Large Language Models (LLMs) present a potential solution, though their use in countering conspiracy theories is under-researched. Unlike for hate speech, no datasets exist that pair conspiracy theory comments with expert-crafted counterspeech. We address this gap by evaluating the ability of GPT-4o, Llama 3, and Mistral to apply counterspeech strategies derived from psychological research, supplied to the models through structured prompts. Our results show that the models often generate generic, repetitive, or superficial outputs. Additionally, they over-acknowledge fear and frequently hallucinate facts, sources, or figures, making their prompt-based use in practical applications problematic.
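
As a rough illustration of the prompt-based setup the abstract describes, the sketch below asks a model for a counterspeech reply guided by a structured prompt encoding one strategy. The strategy text, prompt wording, and model choice are illustrative assumptions; the paper's actual prompts and psychologically derived strategies are not reproduced here.

```python
# Minimal sketch of structured-prompt counterspeech generation (assumptions:
# OpenAI Python client >= 1.0, OPENAI_API_KEY set in the environment; the
# strategy below is hypothetical, not taken from the paper).
from openai import OpenAI

client = OpenAI()

# Hypothetical counterspeech strategy; the paper derives its strategies
# from psychological research.
STRATEGY = (
    "Acknowledge the commenter's underlying concern, then point out a "
    "logical inconsistency in the conspiracy narrative. Do not invent "
    "facts, sources, or figures."
)

def generate_counterspeech(comment: str) -> str:
    """Generate a counterspeech reply guided by the structured strategy prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.7,
        messages=[
            {"role": "system",
             "content": f"You write counterspeech to online comments. Strategy: {STRATEGY}"},
            {"role": "user",
             "content": f"Write a short reply to this comment:\n{comment}"},
        ],
    )
    return response.choices[0].message.content

print(generate_counterspeech("The pandemic was planned by a secret elite."))
```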

2024

Detection of Conspiracy Theories Beyond Keyword Bias in German-Language Telegram Using Large Language Models
Milena Pustet | Elisabeth Steffen | Helena Mihaljevic
Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)

The automated detection of conspiracy theories online typically relies on supervised learning. However, creating the necessary training data requires expertise, time, and mental resilience, given the often harmful content. Moreover, available datasets are predominantly in English and often keyword-based, introducing a token-level bias into the models. Our work addresses the task of detecting conspiracy theories in German Telegram messages. We compare the performance of supervised fine-tuning approaches using BERT-like models with prompt-based approaches using Llama2, GPT-3.5, and GPT-4, which require little or no additional training data. We use a dataset of ∼4,000 messages collected during the COVID-19 pandemic, without the use of keyword filters. Our findings demonstrate that both approaches can be leveraged effectively: for supervised fine-tuning, we report an F1 score of ∼0.8 for the positive class, making our model comparable to recent models trained on keyword-focused English corpora. We demonstrate our model's adaptability to intra-domain temporal shifts, achieving F1 scores of ∼0.7. Among the prompting variants, GPT-4 performs best, achieving an F1 score of ∼0.8 for the positive class in a zero-shot setting when equipped with a custom conspiracy theory definition.
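
The best-performing prompting variant in the abstract is zero-shot GPT-4 with a custom conspiracy theory definition. The sketch below shows what such a setup could look like; the definition text and prompt wording are illustrative assumptions, as the paper's actual definition is not reproduced here.

```python
# Minimal sketch of zero-shot conspiracy theory classification with a custom
# definition (assumptions: OpenAI Python client >= 1.0, OPENAI_API_KEY set;
# the definition below is hypothetical, not the paper's).
from openai import OpenAI

client = OpenAI()

# Hypothetical working definition; the paper supplies its own custom definition.
DEFINITION = (
    "A conspiracy theory claims that significant events are secretly "
    "orchestrated by a small, powerful group acting with hidden, "
    "usually malicious intent."
)

def classify(message: str) -> str:
    """Return 'yes' or 'no' for whether the message expresses a conspiracy theory."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": (f"Definition: {DEFINITION}\n"
                         "Answer only 'yes' or 'no': does the following "
                         "message express or endorse a conspiracy theory?")},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# German Telegram-style example, matching the paper's data domain.
print(classify("Die Impfung ist Teil eines geheimen Plans zur Kontrolle der Bevölkerung."))
```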