Jan Fillies


2025

Improving Hate Speech Classification with Cross-Taxonomy Dataset Integration
Jan Fillies | Adrian Paschke
Proceedings of the 9th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature (LaTeCH-CLfL 2025)

Algorithmic hate speech detection faces significant challenges due to the diverse definitions and datasets used in research and practice. Social media platforms, legal frameworks, and institutions each apply distinct yet overlapping definitions, complicating classification efforts. This study addresses these challenges by demonstrating that existing datasets and taxonomies can be integrated into a unified model, enhancing prediction performance and reducing reliance on multiple specialized classifiers. The work introduces a universal taxonomy and a classifier capable of detecting hate speech across a wide range of definitions within a single framework. Our approach is validated by combining two widely used but differently annotated datasets, showing improved classification performance on an independent test set. This work highlights the potential of dataset and taxonomy integration in advancing hate speech detection, increasing efficiency, and ensuring broader applicability across contexts.
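As a rough illustration of the cross-taxonomy integration idea described in the abstract, the following sketch maps two hypothetically annotated datasets onto one shared label set before training a single classifier. The label names, the mapping, and the model choice are placeholders, not the paper's actual taxonomy or architecture:

```python
# Minimal sketch, assuming two toy datasets with incompatible label schemes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each source dataset keeps its own annotation scheme (illustrative labels).
dataset_a = [("example post one", "hateful"), ("example post two", "offensive")]
dataset_b = [("example post three", "abusive"), ("example post four", "none")]

# Map every source-specific label onto one unified taxonomy.
unified_label = {
    "hateful": "hate", "offensive": "hate",   # dataset A's scheme
    "abusive": "hate", "none": "not_hate",    # dataset B's scheme
}

texts, labels = zip(*[(t, unified_label[l]) for t, l in dataset_a + dataset_b])

# One model now covers the definitions of both original datasets.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(list(texts), list(labels))
print(model.predict(["example post five"]))
```

The key design point is that the mapping table, not the model, carries the taxonomy alignment, so further datasets can be folded in by extending the dictionary rather than training an additional specialized classifier.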

A Comprehensive Taxonomy of Bias Mitigation Methods for Hate Speech Detection
Jan Fillies | Marius Wawerek | Adrian Paschke
Proceedings of the 9th Workshop on Online Abuse and Harms (WOAH)

Algorithmic hate speech detection is widely used today. However, biases within these systems can lead to discrimination. This research presents an overview of bias mitigation strategies in the field of hate speech detection and proposes a novel taxonomy of bias mitigation methods. The identified strategies are grouped into four categories based on their operating principles, characterized by their key concepts, and analyzed in terms of their application stage and their need for knowledge of protected attributes. Additionally, the paper discusses potential combinations of these strategies. This research shifts the focus from identifying present biases to examining the similarities and differences between mitigation strategies, thereby facilitating the exchange, stacking, and ensembling of these strategies in future research.
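To make the characterization dimensions mentioned in the abstract concrete, here is a toy sketch of how strategies could be encoded along application stage and protected-attribute requirements. The stage names and example entries are generic placeholders from the fairness literature, not the four categories actually proposed in the paper:

```python
# Hypothetical encoding of the survey's characterization dimensions.
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    PRE_PROCESSING = "data"        # e.g. re-sampling or re-labelling
    IN_PROCESSING = "training"     # e.g. modified training objectives
    POST_PROCESSING = "inference"  # e.g. output adjustment

@dataclass
class MitigationStrategy:
    name: str
    stage: Stage
    needs_protected_attributes: bool  # does it require group labels?

strategies = [
    MitigationStrategy("data re-balancing", Stage.PRE_PROCESSING, True),
    MitigationStrategy("adversarial debiasing", Stage.IN_PROCESSING, True),
    MitigationStrategy("score calibration", Stage.POST_PROCESSING, False),
]

# Which strategies remain usable when protected attributes are unknown?
print([s.name for s in strategies if not s.needs_protected_attributes])
```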

2024

Simple LLM based Approach to Counter Algospeak
Jan Fillies | Adrian Paschke
Proceedings of the 8th Workshop on Online Abuse and Harms (WOAH 2024)

With the use of algorithmic moderation on online communication platforms, an increase in adaptive language aiming to evade the automatic detection of problematic content has been observed. One form of this adapted language is known as “Algospeak” and is most commonly associated with large social media platforms such as TikTok. It builds on Leetspeak and online slang, with the explicit intention of avoiding machine readability. The machine-learning algorithms employed to automate content moderation mostly rely on human-annotated datasets and supervised learning, and are often not adapted to a wide variety of languages or to changes in language. This work uses linguistic examples identified in the research literature to introduce a taxonomy for Algospeak and shows that an LLM (GPT-4) can correct 79.4% of the established terms to their true form or, where needed, to their underlying associated concepts. Given an example sentence, 98.5% of terms are correctly identified. This research demonstrates that LLMs are a promising means of countering moderation avoidance through Algospeak.
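In the spirit of the GPT-4 experiments the abstract describes, the following sketch prompts an LLM to rewrite Algospeak back into standard English. It assumes the OpenAI Python client; the prompt wording, helper function, and example sentence are assumptions for illustration, not the paper's actual setup:

```python
# Illustrative sketch: normalizing Algospeak with an LLM (assumed setup).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def normalize_algospeak(sentence: str) -> str:
    """Ask the model to rewrite evasive spellings into their true form."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You translate 'Algospeak' (spellings altered to "
                         "evade content moderation, e.g. 'unalive') back "
                         "into standard English. Return only the rewritten "
                         "sentence.")},
            {"role": "user", "content": sentence},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage with a well-documented Algospeak term.
print(normalize_algospeak("the video said he was unalived last year"))
```

Supplying the term inside a full sentence, as here, mirrors the abstract's observation that sentence context raises identification accuracy compared with isolated terms.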