Desmond Patton


2023

Evaluation of African American Language Bias in Natural Language Generation
Nicholas Deas | Jessica Grieser | Shana Kleiner | Desmond Patton | Elsbeth Turcan | Kathleen McKeown
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

While biases disadvantaging African American Language (AAL) have been uncovered in models for tasks such as speech recognition and toxicity detection, there has been little investigation of these biases for language generation models like ChatGPT. We evaluate how well LLMs understand AAL in comparison to White Mainstream English (WME), the encouraged “standard” form of English taught in American classrooms. We measure large language model performance on two tasks: a counterpart generation task, where a model generates AAL given WME and vice versa, and a masked span prediction (MSP) task, where models predict a phrase hidden from their input. Using a novel dataset of AAL texts from a variety of regions and contexts, we present evidence of dialectal bias for six pre-trained LLMs through performance gaps on these tasks.
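
Below is a minimal sketch of the masked span prediction (MSP) setup described above, using T5-style span infilling from the HuggingFace transformers library. The checkpoint (t5-base) and the WME/AAL example pair are illustrative assumptions, not the paper's actual models or data; the idea is that a consistent gap in span-prediction quality across many such dialect pairs would be evidence of dialectal bias.

```python
# Minimal MSP sketch: hide a span with T5's sentinel token and let the
# model fill it in. Assumptions: t5-base checkpoint, hypothetical examples.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def predict_masked_span(text_with_mask: str) -> str:
    """Return the model's guess for the span marked by <extra_id_0>."""
    inputs = tokenizer(text_with_mask, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=10)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Hypothetical WME sentence and AAL counterpart with the same span hidden;
# comparing prediction quality over many such pairs exposes a dialect gap.
wme = "She is <extra_id_0> to the store right now."
aal = "She <extra_id_0> to the store right now."
print(predict_masked_span(wme))
print(predict_masked_span(aal))
```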

2022

Mitigating Covertly Unsafe Text within Natural Language Systems
Alex Mei | Anisha Kabir | Sharon Levy | Melanie Subbiah | Emily Allaway | John Judge | Desmond Patton | Bruce Bimber | Kathleen McKeown | William Yang Wang
Findings of the Association for Computational Linguistics: EMNLP 2022

An increasingly prevalent problem for intelligent technologies is text safety, as uncontrolled systems may generate recommendations to their users that lead to injury or life-threatening consequences. However, the degree of explicitness of a generated statement that can cause physical harm varies. In this paper, we distinguish types of text that can lead to physical harm and establish one particularly underexplored category: covertly unsafe text. Then, we further break down this category with respect to the system’s information and discuss solutions to mitigate the generation of text in each of these subcategories. Ultimately, our work defines the problem of covertly unsafe language that causes physical harm and argues that this subtle yet dangerous issue needs to be prioritized by stakeholders and regulators. We highlight mitigation strategies to inspire future researchers to tackle this challenging problem and help improve safety within smart systems.

SafeText: A Benchmark for Exploring Physical Safety in Language Models
Sharon Levy | Emily Allaway | Melanie Subbiah | Lydia Chilton | Desmond Patton | Kathleen McKeown | William Yang Wang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Understanding what constitutes safe text is an important issue in natural language processing and can often prevent the deployment of models deemed harmful and unsafe. One such type of safety that has been scarcely studied is commonsense physical safety, i.e. text that is not explicitly violent and requires additional commonsense knowledge to comprehend that it leads to physical harm. We create the first benchmark dataset, SafeText, comprising real-life scenarios with paired safe and physically unsafe pieces of advice. We utilize SafeText to empirically study commonsense physical safety across various models designed for text generation and commonsense reasoning tasks. We find that state-of-the-art large language models are susceptible to the generation of unsafe text and have difficulty rejecting unsafe advice. As a result, we argue for further studies of safety and the assessment of commonsense physical safety in models before release.
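
One simple way to probe the preference the abstract describes is to compare a language model's likelihood of the safe versus the unsafe advice for the same scenario. The sketch below does this with GPT-2; the model choice, the log-probability scoring protocol, and the example pair are assumptions for illustration, not the paper's exact evaluation.

```python
# Score paired advice by total log-probability under a causal LM and check
# which one the model prefers. Assumptions: GPT-2, fabricated example pair.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_prob(text: str) -> float:
    """Total log-probability the model assigns to the text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # labels=ids yields mean token cross-entropy; undo the mean for a sum.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)

scenario = "If you are bitten by a snake,"  # fabricated scenario
safe = scenario + " seek medical attention immediately."
unsafe = scenario + " suck the venom out of the wound."
print("prefers safe advice:", sequence_log_prob(safe) > sequence_log_prob(unsafe))
```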

2019

Detecting and Reducing Bias in a High Stakes Domain
Ruiqi Zhong | Yanda Chen | Desmond Patton | Charlotte Selous | Kathy McKeown
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

Gang-involved youth in cities such as Chicago sometimes post on social media to express their aggression towards rival gangs, and previous research has demonstrated that a deep learning approach can predict aggression and loss in posts. To address the possibility of bias in this sensitive application, we developed an approach to systematically interpret the state-of-the-art model. We found, surprisingly, that it frequently bases its predictions on stop words such as “a” or “on”, an approach that could harm social media users who have no aggressive intentions. To tackle this bias, domain experts annotated the rationales, highlighting words that explain why a tweet is labeled as “aggression”. These new annotations enable us to quantitatively measure how justified the model predictions are and to build models that drastically reduce bias. Our study shows that in high-stakes scenarios, accuracy alone cannot guarantee a good system, and we need new evaluation methods.
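
The occlusion-style probe sketched below illustrates the kind of interpretation the abstract describes: delete one token at a time and measure how much the predicted probability of the “aggression” label drops. The classifier interface, stop-word list, and toy example are all assumptions for illustration, not the paper's actual interpretation method or model.

```python
# Token-occlusion probe: importance of a token = drop in P("aggression")
# when it is removed. The toy classifier below deliberately fires on the
# stop word "on" to mimic the biased behavior the study uncovered.
STOP_WORDS = {"a", "on", "the", "of", "to"}

def occlusion_importance(predict_proba, tokens, label="aggression"):
    """Per-token (token, score) pairs: drop in P(label) when removed."""
    base = predict_proba(tokens)[label]
    return [(tok, base - predict_proba(tokens[:i] + tokens[i + 1:])[label])
            for i, tok in enumerate(tokens)]

def audit_stop_word_reliance(predict_proba, tokens):
    """Flag predictions whose most influential token is a stop word."""
    top_token, _ = max(occlusion_importance(predict_proba, tokens),
                       key=lambda pair: pair[1])
    return top_token, top_token in STOP_WORDS

def toy_predict_proba(tokens):
    # Illustrative stand-in for a trained model, not the paper's classifier.
    p = 0.9 if "on" in tokens else 0.2
    return {"aggression": p, "other": 1.0 - p}

tokens = "meet me on the block later".split()  # hypothetical, benign post
print(audit_stop_word_reliance(toy_predict_proba, tokens))  # ('on', True)
```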

2018

Detecting Gang-Involved Escalation on Social Media Using Context
Serina Chang | Ruiqi Zhong | Ethan Adams | Fei-Tzin Lee | Siddharth Varia | Desmond Patton | William Frey | Chris Kedzie | Kathy McKeown
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Gang-involved youth in cities such as Chicago have increasingly turned to social media to post about their experiences and intents online. In some situations, when they experience the loss of a loved one, their online expression of emotion may evolve into aggression towards rival gangs and ultimately into real-world violence. In this paper, we present a novel system for detecting Aggression and Loss in social media. Our system features the use of domain-specific resources automatically derived from a large unlabeled corpus, and contextual representations of the emotional and semantic content of the user’s recent tweets as well as their interactions with other users. Incorporating context in our Convolutional Neural Network (CNN) leads to a significant improvement.
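
A minimal PyTorch sketch of the architectural pattern the abstract describes: a CNN over the current tweet whose pooled features are concatenated with a context vector summarizing the user's recent tweets and interactions before classification. All dimensions, the three-way label set, and the context encoding are illustrative assumptions.

```python
# Sketch of a CNN classifier that also consumes a context vector.
# Dimensions and the label set (e.g., Aggression/Loss/Other) are assumed.
import torch
import torch.nn as nn

class ContextCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, n_filters=64,
                 context_dim=100, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.classify = nn.Linear(n_filters + context_dim, n_classes)

    def forward(self, token_ids, context_vec):
        x = self.embed(token_ids).transpose(1, 2)            # (batch, emb, seq)
        feats = torch.relu(self.conv(x)).max(dim=2).values   # max-pool over time
        return self.classify(torch.cat([feats, context_vec], dim=1))

model = ContextCNN()
tweet = torch.randint(0, 5000, (1, 20))   # one tweet of 20 token ids
context = torch.randn(1, 100)             # e.g., a summary of recent tweets
print(model(tweet, context).shape)        # torch.Size([1, 3])
```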

2016

Automatically Processing Tweets from Gang-Involved Youth: Towards Detecting Loss and Aggression
Terra Blevins | Robert Kwiatkowski | Jamie MacBeth | Kathleen McKeown | Desmond Patton | Owen Rambow
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Violence is a serious problem for cities like Chicago and has been exacerbated by the use of social media by gang-involved youths for taunting rival gangs. We present a corpus of tweets from a young and powerful female gang member and her communicators, which we have annotated with discourse intention, using a deep read to understand how and what triggered conversations to escalate into aggression. We use this corpus to develop a part-of-speech tagger and phrase table for the variant of English that is used, as well as a classifier for identifying tweets that express grieving and aggression.
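
As a rough illustration of the final step, the scikit-learn pipeline below trains a tweet classifier over loss/aggression/other labels. The tiny inline dataset is fabricated for illustration; the paper's annotated corpus, dialect-specific preprocessing (the POS tagger and phrase table), and feature set are not reproduced here.

```python
# Tiny TF-IDF + logistic-regression classifier for loss/aggression/other.
# The four inline tweets and their labels are fabricated examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "miss you bro rip",          # fabricated examples
    "we pulling up on them",
    "good morning everybody",
    "cant believe he gone",
]
labels = ["loss", "aggression", "other", "loss"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["rip lil bro"]))
```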