Susan Gauch


2025

NUTMEG: Separating Signal From Noise in Annotator Disagreement
Jonathan Ivey | Susan Gauch | David Jurgens
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

NLP models often rely on human-labeled data for training and evaluation. Many approaches crowdsource this data from a large number of annotators with varying skills, backgrounds, and motivations, resulting in conflicting annotations. These conflicts have traditionally been resolved by aggregation methods that assume disagreements are errors. Recent work has argued that for many tasks annotators may have genuine disagreements and that variation should be treated as signal rather than noise. However, few models separate signal and noise in annotator disagreement. In this work, we introduce NUTMEG, a new Bayesian model that incorporates information about annotator backgrounds to remove noisy annotations from human-labeled training data while preserving systematic disagreements. Using synthetic and real-world data, we show that NUTMEG is more effective at recovering ground truth from annotations with systematic disagreement than traditional aggregation methods, and we demonstrate that downstream models trained on NUTMEG-aggregated data significantly outperform models trained on data from traditional aggregation methods. We provide further analysis characterizing how differences in subpopulation sizes, rates of disagreement, and rates of spam affect the performance of our model. Our results highlight the importance of accounting for both annotator competence and systematic disagreements when training on human-labeled data.

2024

Using Sarcasm to Improve Cyberbullying Detection
Xiaoyu Guo | Susan Gauch
Proceedings of the Fourth Workshop on Threat, Aggression & Cyberbullying @ LREC-COLING-2024

Cyberbullying has become more prevalent over time, especially towards minority groups, and online human moderators cannot detect cyberbullying content efficiently. Prior work has addressed this problem by detecting cyberbullying with deep learning approaches. In this project, we compare several BERT-based benchmark methods for cyberbullying detection and conduct a failure analysis to see where the model fails to correctly identify cyberbullying. We find that many falsely classified texts are sarcastic, so we propose a method to mitigate the false classifications by incorporating neural network-based sarcasm detection. We define a simple multilayer perceptron (MLP) that incorporates sarcasm detection in the final cyberbullying classifications and demonstrate improvement over benchmark methods.

2023

Improving Cross-Domain Hate Speech Generalizability with Emotion Knowledge
Shi Yin Hong | Susan Gauch
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

1993

Experiments in Syntactic and Semantic Classification and Disambiguation using Bootstrapping
Robert Futrelle | Susan Gauch
Acquisition of Lexical Knowledge from Text