Maximilian Wich


2021

Understanding and Interpreting the Impact of User Context in Hate Speech Detection
Edoardo Mosca | Maximilian Wich | Georg Groh
Proceedings of the Ninth International Workshop on Natural Language Processing for Social Media

As hate speech spreads on social media and in online communities, research continues to work on its automatic detection. Recently, recognition performance has been increasing thanks to advances in deep learning and the integration of user features. This work investigates the effects that such features can have on a detection model. Unlike previous research, we show that a simple performance comparison does not expose the full impact of including contextual and user information. By leveraging explainability techniques, we show (1) that user features play a role in the model’s decision and (2) how they affect the feature space learned by the model. Besides revealing that, and also illustrating why, user features are the reason for performance gains, we show how such techniques can be combined to better understand the model and to detect unintended bias.
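The abstract does not spell out which explainability technique is used, so the following is only a minimal sketch of the general idea: quantify how much a block of user features contributes to a detection model's predictions. It uses synthetic data and scikit-learn's permutation importance as a stand-in; all feature names and numbers are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: compare the contribution of text vs. user features to a classifier,
# using permutation importance as a stand-in explainability technique.
# All data is random and purely illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
text_feats = rng.normal(size=(n, 10))   # e.g., averaged text embeddings (hypothetical)
user_feats = rng.normal(size=(n, 3))    # e.g., follower count, account age, activity (hypothetical)
# Synthetic label that depends on both text and user signals
y = (text_feats[:, 0] + 0.8 * user_feats[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.hstack([text_feats, user_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)

# Aggregate importance for the text block vs. the user-feature block
print("text features :", imp.importances_mean[:10].sum().round(3))
print("user features :", imp.importances_mean[10:].sum().round(3))
```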

End-to-End Annotator Bias Approximation on Crowdsourced Single-Label Sentiment Analysis
Gerhard Hagerer | David Szabo | Andreas Koch | Maria Luisa Ripoll Dominguez | Christian Widmer | Maximilian Wich | Hannah Danner | Georg Groh
Proceedings of the 4th International Conference on Natural Language and Speech Processing (ICNLSP 2021)

German Abusive Language Dataset with Focus on COVID-19
Maximilian Wich | Svenja Räther | Georg Groh
Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)

Investigating Annotator Bias in Abusive Language Datasets
Maximilian Wich | Christian Widmer | Gerhard Hagerer | Georg Groh
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)

Nowadays, social media platforms use classification models to cope with hate speech and abusive language. A problem with these models is their vulnerability to bias. A prevalent form of bias in hate speech and abusive language datasets is annotator bias, caused by the annotators’ subjective perception and the complexity of the annotation task. In our paper, we develop a set of methods to measure annotator bias in abusive language datasets and to identify different perspectives on abusive language. We apply these methods to four different abusive language datasets. Our proposed approach supports the annotation processes of such datasets and future research addressing different perspectives on the perception of abusive language.
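The abstract only refers to "a set of methods" without naming them, so the snippet below is just one simple, hypothetical way to surface annotator-level disagreement: measure how often each annotator deviates from the majority label of the items they annotated. The triples and annotator IDs are synthetic; this is not the paper's actual method.

```python
# Sketch (not the paper's method): per-annotator disagreement with the majority label.
from collections import Counter, defaultdict

# (item_id, annotator_id, label) triples as they might appear in a raw dataset (synthetic)
annotations = [
    ("t1", "a1", 1), ("t1", "a2", 1), ("t1", "a3", 0),
    ("t2", "a1", 0), ("t2", "a2", 1), ("t2", "a3", 0),
    ("t3", "a1", 1), ("t3", "a2", 1), ("t3", "a3", 1),
]

# Majority label per item
per_item = defaultdict(list)
for item, _, label in annotations:
    per_item[item].append(label)
majority = {item: Counter(labels).most_common(1)[0][0] for item, labels in per_item.items()}

# Share of each annotator's labels that disagree with the majority
disagreement = defaultdict(list)
for item, annotator, label in annotations:
    disagreement[annotator].append(int(label != majority[item]))

for annotator, flags in sorted(disagreement.items()):
    print(annotator, sum(flags) / len(flags))
```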

2020

Impact of Politically Biased Data on Hate Speech Classification
Maximilian Wich | Jan Bauer | Georg Groh
Proceedings of the Fourth Workshop on Online Abuse and Harms

One challenge that social media platforms face nowadays is hate speech. Hence, automatic hate speech detection has been increasingly researched in recent years, in particular with the rise of deep learning. A problem with these models is their vulnerability to undesirable bias in the training data. We investigate the impact of political bias on hate speech classification by constructing three politically biased data sets (left-wing, right-wing, politically neutral) and comparing the performance of classifiers trained on them. We show that (1) political bias impairs the performance of hate speech classifiers and (2) an explainable machine learning model can help to visualize such bias within the training data. The results show that political bias in the training data affects hate speech classification and can become a serious issue.
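A minimal sketch of the comparison step described above, under heavy assumptions: tiny toy subsets stand in for the three politically biased data sets, and a TF-IDF plus logistic regression pipeline stands in for the classifiers, which the abstract does not specify. The texts, labels, and scores are illustrative only.

```python
# Sketch: train one classifier per politically biased subset and compare
# their scores on a common test set. Toy data; stand-in model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical subsets; in practice each would hold thousands of labeled posts.
subsets = {
    "left-wing":  (["awful take", "great point", "terrible people", "well argued"], [1, 0, 1, 0]),
    "right-wing": (["total nonsense", "fair enough", "disgusting crowd", "good article"], [1, 0, 1, 0]),
    "neutral":    (["awful nonsense", "nice discussion", "terrible crowd", "interesting read"], [1, 0, 1, 0]),
}
test_texts = ["awful crowd", "good discussion", "terrible take", "fair point"]
test_labels = [1, 0, 1, 0]

for name, (texts, labels) in subsets.items():
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    preds = model.predict(test_texts)
    print(f"{name:11s} macro F1 = {f1_score(test_labels, preds, average='macro'):.2f}")
```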

Identifying and Measuring Annotator Bias Based on Annotators’ Demographic Characteristics
Hala Al Kuwatly | Maximilian Wich | Georg Groh
Proceedings of the Fourth Workshop on Online Abuse and Harms

Machine learning has recently been used to detect hate speech and other forms of abusive language on online platforms. However, a notable weakness of machine learning models is their vulnerability to bias, which can impair their performance and fairness. One type is annotator bias, caused by the subjective perception of the annotators. In this work, we investigate annotator bias using classification models trained on data from demographically distinct annotator groups. To do so, we sample balanced subsets of data that are labeled by demographically distinct annotators. We then train classifiers on these subsets, analyze their performance on similarly grouped test sets, and compare them statistically. Our findings show that the proposed approach successfully identifies bias and that demographic features, such as first language, age, and education, correlate with significant performance differences.
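The abstract mentions comparing classifier performances statistically but does not name the test, so the following sketch uses an independent t-test on per-example correctness as one simple option. The correctness vectors, accuracy levels, and group labels are synthetic assumptions, not results from the paper.

```python
# Sketch: test whether two classifiers trained on data labeled by demographically
# distinct annotator groups differ significantly in accuracy on the same test set.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# 1 = classifier got the test example right, 0 = wrong (hypothetical results)
correct_group_a = rng.binomial(1, 0.82, size=500)   # e.g., native-speaker annotators (assumed)
correct_group_b = rng.binomial(1, 0.74, size=500)   # e.g., non-native-speaker annotators (assumed)

stat, p = ttest_ind(correct_group_a, correct_group_b)
print(f"accuracy A = {correct_group_a.mean():.3f}, accuracy B = {correct_group_b.mean():.3f}")
print(f"t = {stat:.2f}, p = {p:.4f} -> "
      + ("significant difference" if p < 0.05 else "no significant difference"))
```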

Investigating Annotator Bias with a Graph-Based Approach
Maximilian Wich | Hala Al Kuwatly | Georg Groh
Proceedings of the Fourth Workshop on Online Abuse and Harms

A challenge that many online platforms face is hate speech and other forms of online abuse. To cope with this, hate speech detection systems based on machine learning are developed to reduce the manual work of monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which can have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating against minorities). In the scope of this study, we investigate annotator bias, a form of bias that annotators cause due to their differing knowledge of the task and their subjective perception. Our goal is to identify annotator bias based on similarities in the annotators’ annotation behavior. To do so, we build a graph from the annotations of the different annotators, apply a community detection algorithm to group the annotators, and train a classifier for each group, comparing their performances. By doing so, we are able to identify annotator bias within a data set. The proposed method and the collected insights can contribute to developing fairer and more reliable hate speech classification models.
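A minimal sketch of the graph-based grouping step, under assumptions the abstract does not confirm: annotators are connected by their pairwise agreement rate on co-annotated items, and a modularity-based community detection algorithm from networkx groups them. The annotation triples are synthetic, and the paper's exact graph construction and algorithm may differ.

```python
# Sketch: agreement-weighted annotator graph + community detection.
from collections import defaultdict
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# (item_id, annotator_id, label) triples (synthetic)
annotations = [
    ("t1", "a1", 1), ("t1", "a2", 1), ("t1", "a3", 0), ("t1", "a4", 0),
    ("t2", "a1", 1), ("t2", "a2", 1), ("t2", "a3", 0), ("t2", "a4", 0),
    ("t3", "a1", 0), ("t3", "a2", 0), ("t3", "a3", 1), ("t3", "a4", 1),
    ("t4", "a1", 1), ("t4", "a2", 1), ("t4", "a3", 1), ("t4", "a4", 1),
]

# Collect labels per item, then count pairwise agreements between annotators
labels_per_item = defaultdict(dict)
for item, annotator, label in annotations:
    labels_per_item[item][annotator] = label

agree, total = defaultdict(int), defaultdict(int)
for item_labels in labels_per_item.values():
    for a, b in combinations(sorted(item_labels), 2):
        total[(a, b)] += 1
        agree[(a, b)] += int(item_labels[a] == item_labels[b])

# Edge weight = agreement rate between two annotators on co-annotated items
G = nx.Graph()
for pair, n_shared in total.items():
    G.add_edge(*pair, weight=agree[pair] / n_shared)

communities = greedy_modularity_communities(G, weight="weight")
for i, group in enumerate(communities):
    print(f"group {i}: {sorted(group)}")
# Follow-up step described in the abstract: train one classifier per group
# and compare their performances to surface annotator bias.
```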