@inproceedings{marce-poliak-2022-gender,
    title = "On Gender Biases in Offensive Language Classification Models",
    author = "Marc{\'e}, Sanjana  and
      Poliak, Adam",
    editor = "Hardmeier, Christian  and
      Basta, Christine  and
      Costa-juss{\`a}, Marta R.  and
      Stanovsky, Gabriel  and
      Gonen, Hila",
    booktitle = "Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.gebnlp-1.19/",
    doi = "10.18653/v1/2022.gebnlp-1.19",
    pages = "174--183",
    abstract = "We explore whether neural Natural Language Processing models trained to identify offensive language in tweets contain gender biases. We add historically gendered and gender ambiguous American names to an existing offensive language evaluation set to determine whether models' predictions are sensitive or robust to gendered names. While we see some evidence that these models might be prone to biased stereotypes that men use more offensive language than women, our results indicate that these models' binary predictions might not greatly change based upon gendered names."
}