Olivia Redfield
2023
A Keyword Based Approach to Understanding the Overpenalization of Marginalized Groups by English Marginal Abuse Models on Twitter
Kyra Yee | Alice Schoenauer Sebag | Olivia Redfield | Matthias Eck | Emily Sheng | Luca Belli
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)
Harmful content detection models tend to have higher false positive rates for content from marginalized groups. In the context of marginal abuse modeling on Twitter, such disproportionate penalization poses the risk of reduced visibility, where marginalized communities lose the opportunity to voice their opinion on the platform. Current approaches to algorithmic harm mitigation and bias detection for NLP models are often ad hoc and subject to human bias. We make two main contributions in this paper. First, we design a novel methodology, which provides a principled approach to detecting and measuring the severity of potential harms associated with a text-based model. Second, we apply our methodology to audit Twitter’s English marginal abuse model, which is used for removing amplification eligibility of marginally abusive content. Without utilizing demographic labels or dialect classifiers, we are still able to detect and measure the severity of issues related to the over-penalization of the speech of marginalized communities, such as the use of reclaimed speech, counterspeech, and identity-related terms. In order to mitigate the associated harms, we experiment with adding additional true negative examples and find that doing so provides improvements to our fairness metrics without large degradations in model performance.
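The audit described above compares how often benign content is flagged overall versus benign content containing specific keywords (e.g., reclaimed or identity-related terms). The sketch below is only a minimal illustration of that kind of keyword-based false-positive comparison, not the authors' actual pipeline; the data layout, keyword list, and score threshold are hypothetical placeholders.

```python
# Minimal sketch of a keyword-based false-positive audit for a text
# classifier. Not the paper's implementation: the Example fields,
# threshold, and keywords are assumptions for illustration only.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Example:
    text: str          # tweet text
    score: float       # model's marginal-abuse score in [0, 1]
    is_abusive: bool   # ground-truth label from human annotation


def false_positive_rate(examples: Iterable[Example], threshold: float) -> float:
    """FPR = benign examples flagged by the model / all benign examples."""
    benign = [e for e in examples if not e.is_abusive]
    if not benign:
        return 0.0
    flagged = sum(1 for e in benign if e.score >= threshold)
    return flagged / len(benign)


def audit_by_keywords(examples: List[Example], keywords: List[str],
                      threshold: float = 0.5) -> dict:
    """Compare FPR on the keyword-matching subset against the full set."""
    subset = [e for e in examples
              if any(k in e.text.lower() for k in keywords)]
    return {
        "overall_fpr": false_positive_rate(examples, threshold),
        "keyword_subset_fpr": false_positive_rate(subset, threshold),
        "keyword_subset_size": len(subset),
    }


if __name__ == "__main__":
    # Hypothetical toy data: a large subset-to-overall FPR ratio would
    # signal potential over-penalization of content using these terms.
    data = [
        Example("benign tweet using a reclaimed identity term", 0.8, False),
        Example("ordinary benign tweet", 0.1, False),
        Example("genuinely abusive tweet", 0.9, True),
    ]
    print(audit_by_keywords(data, keywords=["identity term"]))
```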
2019
Natural Questions: A Benchmark for Question Answering Research
Tom Kwiatkowski | Jennimaria Palomaki | Olivia Redfield | Michael Collins | Ankur Parikh | Chris Alberti | Danielle Epstein | Illia Polosukhin | Jacob Devlin | Kenton Lee | Kristina Toutanova | Llion Jones | Matthew Kelcey | Ming-Wei Chang | Andrew M. Dai | Jakob Uszkoreit | Quoc Le | Slav Petrov
Transactions of the Association for Computational Linguistics, Volume 7
We present the Natural Questions corpus, a question answering data set. Questions consist of real anonymized, aggregated queries issued to the Google search engine. An annotator is presented with a question along with a Wikipedia page from the top 5 search results, and annotates a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or marks null if no long/short answer is present. The public release consists of 307,373 training examples with single annotations; 7,830 examples with 5-way annotations for development data; and a further 7,842 examples with 5-way annotations sequestered as test data. We present experiments validating the quality of the data. We also describe an analysis of 25-way annotations on 302 examples, giving insights into human variability on the annotation task. We introduce robust metrics for the purposes of evaluating question answering systems; demonstrate high human upper bounds on these metrics; and establish baseline results using competitive methods drawn from related literature.
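The abstract describes an annotation scheme in which each example pairs a real query with a Wikipedia page, a long answer (typically a paragraph), and zero or more short answers, or is marked null. The sketch below is only an illustrative rendering of that structure; the class and field names are hypothetical and do not mirror the official release schema.

```python
# Illustrative data structure for the annotation scheme described above.
# Field names are assumptions for exposition, not the released format.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class NQExample:
    question: str                      # anonymized, aggregated search query
    wikipedia_url: str                 # page drawn from the top-5 search results
    long_answer: Optional[Tuple[int, int]] = None   # (start, end) span, or None if null
    short_answers: List[Tuple[int, int]] = field(default_factory=list)  # entity spans

    @property
    def is_null(self) -> bool:
        """True when no long answer (and hence no short answer) was found."""
        return self.long_answer is None


# Hypothetical example with a paragraph-level long answer and one
# entity short answer contained inside it.
ex = NQExample(
    question="who wrote the declaration of independence",
    wikipedia_url="https://en.wikipedia.org/wiki/United_States_Declaration_of_Independence",
    long_answer=(120, 195),
    short_answers=[(151, 154)],
)
print(ex.is_null)  # False
```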