Jeffrey Sorensen

Also published as: Jeffrey S. Sorensen


2023

Harmful Language Datasets: An Assessment of Robustness
Katerina Korre | John Pavlopoulos | Jeffrey Sorensen | Léo Laugier | Ion Androutsopoulos | Lucas Dixon | Alberto Barrón-Cedeño
The 7th Workshop on Online Abuse and Harms (WOAH)

The automated detection of harmful language is of great importance for the online world, especially given the growth of social media and, consequently, polarisation. There are many open challenges to high-quality detection of harmful text, from dataset creation to generalisable application, calling for more systematic studies. In this paper, we explore re-annotation as a means of examining the robustness of existing labelled datasets, showing that, even under alternative definitions, inter-annotator agreement remains highly inconsistent, highlighting the intrinsically subjective and variable nature of the task. In addition, we build automatic toxicity detectors using the existing datasets with their original labels, and we evaluate them on our multi-definition and multi-source datasets. Surprisingly, while other studies report that hate speech detection models perform better on data drawn from the same distribution as the training set, our analysis demonstrates that this is not necessarily true.
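
As an illustration of the re-annotation methodology, a minimal sketch of measuring agreement between an original labelling round and a re-annotation round is shown below; the labels and the choice of Cohen's kappa as the agreement statistic are illustrative assumptions, not the paper's exact protocol.

```python
# Hypothetical sketch: agreement between original labels and re-annotations.
from sklearn.metrics import cohen_kappa_score

original_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # made-up original annotations
reannotations   = [1, 0, 0, 1, 0, 1, 1, 0]  # made-up re-annotation round

# Cohen's kappa corrects raw percent agreement for chance agreement;
# low values across rounds signal an unstable, subjective labelling task.
print(f"kappa = {cohen_kappa_score(original_labels, reannotations):.2f}")
```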

JUAGE at SemEval-2023 Task 10: Parameter Efficient Classification
Jeffrey Sorensen | Katerina Korre | John Pavlopoulos | Katrin Tomanek | Nithum Thain | Lucas Dixon | Léo Laugier
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)

Using pre-trained language models to build classifiers from small to modest amounts of training data is an area of active research. Parameter-efficient tuning extends the ability of large language models to generalize from few-shot examples and to produce strong classifiers. Using the Explainable Detection of Online Sexism (EDOS) training data and a small number of trainable weights to create a tuned prompt vector, we built a competitive model for this task that was top-ranked in Subtask B.
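
A minimal sketch of the prompt-tuning idea referenced above, assuming a frozen pre-trained model whose token embeddings are available; all dimensions and names here are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt vector prepended to frozen model embeddings."""
    def __init__(self, prompt_len: int, embed_dim: int):
        super().__init__()
        # The only trainable weights: prompt_len x embed_dim parameters.
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

# Usage sketch: the backbone stays frozen; only the prompt gets gradients.
soft_prompt = SoftPrompt(prompt_len=20, embed_dim=768)
frozen_embeds = torch.randn(4, 128, 768)  # stand-in for LM token embeddings
augmented = soft_prompt(frozen_embeds)    # shape: (4, 148, 768)
```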

2022

Lost in Distillation: A Case Study in Toxicity Modeling
Alyssa Chvasta | Alyssa Lees | Jeffrey Sorensen | Lucy Vasserman | Nitesh Goyal
Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)

In an era of increasingly large pre-trained language models, knowledge distillation is a powerful tool for transferring information from a large model to a smaller one. In particular, distillation is of tremendous benefit when it comes to real-world constraints such as serving latency or serving at scale. However, a loss of robustness in language understanding may be hidden in the process and not immediately revealed by high-level evaluation metrics. In this work, we investigate these hidden costs: what is “lost in distillation”, especially with regard to identity-based model bias, using toxicity modeling as a case study. With reproducible models built on open-source training sets, we examine models distilled from a BERT teacher baseline, using both open-source and proprietary big-data models.
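
For context, a sketch of the standard distillation objective such work builds on: the student is trained to match the teacher's softened output distribution alongside the gold labels. The temperature and mixing weight below are illustrative defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL between temperature-scaled teacher and student outputs.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 2)  # toy student logits (toxic / non-toxic)
teacher = torch.randn(8, 2)  # toy teacher logits
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student, teacher, labels).item())
```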

SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification
Elisabetta Fersini | Francesca Gasparini | Giulia Rizzi | Aurora Saibene | Berta Chulvi | Paolo Rosso | Alyssa Lees | Jeffrey Sorensen
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)

The paper describes SemEval-2022 Task 5: Multimedia Automatic Misogyny Identification (MAMI), which explores the detection of misogynous memes on the web by taking advantage of the available texts and images. The task was organised in two related sub-tasks: the first focused on recognising whether a meme is misogynous or not (Sub-task A), while the second was devoted to recognising types of misogyny (Sub-task B). MAMI was one of the most popular tasks at SemEval-2022, attracting more than 400 participants from 13 countries, with 65 teams involved in Sub-task A and 41 in Sub-task B. The MAMI challenge received 4214 submitted runs (of which 166 were uploaded to the leaderboard), reflecting enthusiastic participation in the proposed problem. The collection and annotation of the task dataset are described. The paper provides an overview of the systems proposed for the challenge, reports the results achieved in both sub-tasks, and outlines the main errors, to characterise the systems' capabilities and to detail future research perspectives.

From the Detection of Toxic Spans in Online Discussions to the Analysis of Toxic-to-Civil Transfer
John Pavlopoulos | Léo Laugier | Alexandros Xenos | Jeffrey Sorensen | Ion Androutsopoulos
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We study the task of toxic spans detection, which concerns the detection of the spans that make a text toxic, when detecting such spans is possible. We introduce a dataset for this task, ToxicSpans, which we release publicly. By experimenting with several methods, we show that sequence labeling models perform best, but that methods which add generic rationale extraction mechanisms on top of classifiers trained to predict whether a post is toxic are also surprisingly promising. Finally, we use ToxicSpans and systems trained on it to provide further analysis of state-of-the-art toxic-to-non-toxic transfer systems, as well as of human performance on that latter task. Our work highlights challenges in finer-grained toxicity detection and mitigation.
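
To make the span formulation concrete, here is a small sketch of converting token-level toxicity predictions into the character offsets the task uses; the tokenisation and predicted mask are made up for demonstration.

```python
def toxic_char_offsets(token_spans, toxic_mask):
    """token_spans: (start, end) character offsets per token;
    toxic_mask: 1 where a sequence labeller marked the token toxic."""
    offsets = set()
    for (start, end), is_toxic in zip(token_spans, toxic_mask):
        if is_toxic:
            offsets.update(range(start, end))
    return sorted(offsets)

# "You are a total idiot" -> tokens with their character offsets
token_spans = [(0, 3), (4, 7), (8, 9), (10, 15), (16, 21)]
print(toxic_char_offsets(token_spans, [0, 0, 0, 1, 1]))
# offsets 10..20, i.e. the span "total idiot"
```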

2021

Civil Rephrases Of Toxic Texts With Self-Supervised Transformers
Léo Laugier | John Pavlopoulos | Jeffrey Sorensen | Lucas Dixon
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Platforms that support online commentary, from social networks to news sites, are increasingly leveraging machine learning to assist their moderation efforts. But this process does not typically provide feedback to the author that would help them contribute according to the community guidelines. This is prohibitively time-consuming for human moderators to do, and computational approaches are still nascent. This work focuses on models that can help suggest rephrasings of toxic comments in a more civil manner. Inspired by recent progress in unpaired sequence-to-sequence tasks, we introduce a self-supervised learning model called CAE-T5. CAE-T5 employs a pre-trained text-to-text transformer, which is fine-tuned with a denoising and cyclic auto-encoder loss. Experimenting with the largest toxicity detection dataset to date (Civil Comments), our model generates sentences that are more fluent and better preserve the initial content than earlier text style transfer systems, as judged by several scoring systems and human evaluation.
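
As a rough illustration of the denoising half of the objective, the sketch below corrupts an input by masking tokens T5-style so the model must reconstruct the original comment; this single-sentinel scheme is a simplification of T5's span corruption, not CAE-T5's actual code.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, sentinel="<extra_id_0>"):
    corrupted = []
    for tok in tokens:
        if random.random() < mask_prob:
            # Collapse adjacent drops into one sentinel, as in span masking.
            if not corrupted or corrupted[-1] != sentinel:
                corrupted.append(sentinel)
        else:
            corrupted.append(tok)
    return corrupted

random.seed(0)
tokens = "you write like a complete fool".split()
# Illustratively high masking rate so the toy example shows a corruption.
print(mask_tokens(tokens, mask_prob=0.3))  # target: recover the masked words
```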

SemEval-2021 Task 5: Toxic Spans Detection
John Pavlopoulos | Jeffrey Sorensen | Léo Laugier | Ion Androutsopoulos
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)

The Toxic Spans Detection task of SemEval-2021 required participants to predict the spans of toxic posts that were responsible for the toxic label of the posts. The task could be addressed as supervised sequence labeling, using training data with gold toxic spans provided by the organisers. It could also be treated as rationale extraction, using classifiers trained on potentially larger external datasets of posts manually annotated as toxic or not, without toxic span annotations. For the supervised sequence labeling approach and evaluation purposes, posts previously labeled as toxic were crowd-annotated for toxic spans. Participants submitted their predicted spans for a held-out test set and were scored using character-based F1. This overview summarises the work of the 36 teams that provided system descriptions.
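
The character-based F1 can be sketched as follows, treating predicted and gold toxic spans as sets of character offsets (with the convention that two empty sets score 1.0); the offsets in the example are illustrative.

```python
def char_f1(pred_offsets, gold_offsets):
    pred, gold = set(pred_offsets), set(gold_offsets)
    if not pred and not gold:
        return 1.0  # neither system nor gold marks anything: full credit
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall) if overlap else 0.0

# Prediction covers offsets 10-20, gold covers 16-20: partial overlap.
print(char_f1(range(10, 21), range(16, 21)))  # 0.625
```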

2020

Six Attributes of Unhealthy Conversations
Ilan Price | Jordan Gifford-Moore | Jory Flemming | Saul Musker | Maayan Roichman | Guillaume Sylvain | Nithum Thain | Lucas Dixon | Jeffrey Sorensen
Proceedings of the Fourth Workshop on Online Abuse and Harms

We present a new dataset of approximately 44,000 comments labelled by crowdworkers. Each comment is labelled as either ‘healthy’ or ‘unhealthy’, in addition to binary labels for the presence of six potentially ‘unhealthy’ sub-attributes: (1) hostile; (2) antagonistic, insulting, provocative or trolling; (3) dismissive; (4) condescending or patronising; (5) sarcastic; and/or (6) an unfair generalisation. Each label also has an associated confidence score. We argue that there is a need for datasets which enable research based on a broad notion of ‘unhealthy online conversation’. We build this typology to encompass a substantial proportion of the individual comments that contribute to unhealthy online conversation. For some of these attributes, this is the first publicly available dataset of this scale. We explore the quality of the dataset, present summary statistics and initial models to illustrate the utility of the data, and highlight limitations and directions for further research.
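
A hypothetical sketch of working with the per-label confidence scores described above; the file name and column names below are assumptions for illustration, not the dataset's actual schema.

```python
import pandas as pd

df = pd.read_csv("unhealthy_conversations.csv")  # hypothetical file name

# Keep comments labelled hostile with high annotator confidence
# ("hostile" and "hostile:confidence" are assumed column names).
confident_hostile = df[(df["hostile"] == 1) & (df["hostile:confidence"] > 0.8)]
print(len(confident_hostile), "comments confidently labelled hostile")
```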

Toxicity Detection: Does Context Really Matter?
John Pavlopoulos | Jeffrey Sorensen | Lucas Dixon | Nithum Thain | Ion Androutsopoulos
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Moderation is crucial to promoting healthy online discussions. Although several ‘toxicity’ detection datasets and models have been published, most of them ignore the context of the posts, implicitly assuming that comments can be judged independently. We investigate this assumption by focusing on two questions: (a) does context affect human judgement, and (b) does conditioning on context improve the performance of toxicity detection systems? We experiment with Wikipedia conversations, limiting the notion of context to the previous post in the thread and the discussion title. We find that context can either amplify or mitigate the perceived toxicity of posts. Moreover, a small but significant subset of manually labeled posts (5% in one of our experiments) ends up with the opposite toxicity label when annotators are not provided with context. Surprisingly, we also find no evidence that context actually improves the performance of toxicity classifiers, having tried a range of classifiers and mechanisms to make them context-aware. This points to the need for larger datasets of comments annotated in context. We make our code and data publicly available.
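
The simplest context-conditioning strategy examined in this line of work can be sketched as below: concatenate the discussion title and the parent post with the target comment. The separator token is an assumption, and real systems may instead use separate encoders for context and target.

```python
def build_input(comment, parent=None, title=None, sep=" [SEP] "):
    """Concatenate available context with the target comment."""
    context = [part for part in (title, parent) if part]
    if not context:
        return comment  # context-free baseline
    return sep.join(context + [comment])

print(build_input("Oh, brilliant idea.",
                  parent="We should delete this whole section.",
                  title="Article cleanup"))
```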

2012

The OpenGrm open-source finite-state grammar software libraries
Brian Roark | Richard Sproat | Cyril Allauzen | Michael Riley | Jeffrey Sorensen | Terry Tai
Proceedings of the ACL 2012 System Demonstrations

2010

Syntax Based Reordering with Automatically Derived Rules for Improved Statistical Machine Translation
Karthik Visweswariah | Jiri Navratil | Jeffrey Sorensen | Vijil Chenthamarakshan | Nandakishore Kambhatla
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2006

Maximum Entropy Based Restoration of Arabic Diacritics
Imed Zitouni | Jeffrey S. Sorensen | Ruhi Sarikaya
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

2005

The Impact of Morphological Stemming on Arabic Mention Detection and Coreference Resolution
Imed Zitouni | Jeffrey Sorensen | Xiaoqiang Luo | Radu Florian
Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages

An Integrated Approach for Arabic-English Named Entity Translation
Hany Hassan | Jeffrey Sorensen
Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages

2004

Dependency Tree Kernels for Relation Extraction
Aron Culotta | Jeffrey Sorensen
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)

2003

TIPS: A Translingual Information Processing System
Yaser Al-Onaizan | Radu Florian | Martin Franz | Hany Hassan | Young-Suk Lee | J. Scott McCarley | Kishore Papineni | Salim Roukos | Jeffrey Sorensen | Christoph Tillmann | Todd Ward | Fei Xia
Companion Volume of the Proceedings of HLT-NAACL 2003 - Demonstrations