Improving Counterfactual Generation for Fair Hate Speech Detection
Aida Mostafazadeh Davani | Ali Omrani | Brendan Kennedy | Mohammad Atari | Xiang Ren | Morteza Dehghani
Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)
Bias mitigation approaches reduce models’ dependence on sensitive features of data, such as social group tokens (SGTs), resulting in equal predictions across the sensitive features. In hate speech detection, however, equalizing model predictions may ignore important differences among targeted social groups, since hate speech can contain stereotypical language specific to each SGT. Here, to take the language specific to each SGT into account, we rely on counterfactual fairness and equalize predictions among counterfactuals generated by changing the SGTs. Our method evaluates the similarity in sentence likelihoods (via pre-trained language models) among counterfactuals, so that SGTs are treated equally only within interchangeable contexts. By applying logit pairing to equalize outcomes on the restricted set of counterfactuals for each instance, we improve fairness metrics while preserving model performance on hate speech detection.
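The abstract describes a three-step pipeline: generate counterfactuals by swapping SGTs, keep only those whose sentence likelihood under a pre-trained language model stays close to the original (i.e., the context treats the groups as interchangeable), and add a logit-pairing term to the training loss over that restricted set. The sketch below illustrates one way this could look; it is not the paper's implementation. GPT-2 as the likelihood scorer, the toy `SGT_LEXICON`, the `margin` threshold in nats, the generic `classifier` callable, and the squared-error pairing penalty with weight `lam` are all illustrative assumptions.

```python
# Minimal sketch of likelihood-restricted counterfactual logit pairing.
# Assumptions (not from the paper): GPT-2 scores sentence likelihood, a toy
# SGT lexicon defines the swap set, and a squared-error penalty pairs logits.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

lm_tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

SGT_LEXICON = ["muslims", "jews", "immigrants", "women"]  # illustrative SGTs


def log_likelihood(sentence: str) -> float:
    """Total log-likelihood of a sentence under the pre-trained LM."""
    ids = lm_tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)  # out.loss = mean NLL over shifted tokens
    return -out.loss.item() * (ids.size(1) - 1)


def restricted_counterfactuals(sentence: str, sgt: str, margin: float = 5.0):
    """Swap the SGT for every alternative; keep only counterfactuals whose
    likelihood stays within `margin` nats of the original, approximating
    'interchangeable contexts'. The threshold rule is an assumption."""
    base_ll = log_likelihood(sentence)
    kept = []
    for alt in SGT_LEXICON:
        if alt == sgt:
            continue
        cf = sentence.replace(sgt, alt)  # naive, case-sensitive swap
        if abs(log_likelihood(cf) - base_ll) <= margin:
            kept.append(cf)
    return kept


def logit_pairing_loss(classifier, texts, sgts, labels, lam: float = 0.1):
    """Task loss plus a penalty equalizing classifier logits across each
    instance's restricted counterfactual set. `classifier(list_of_str)` is
    assumed to return a (batch, num_classes) logit tensor."""
    logits = classifier(texts)
    loss = F.cross_entropy(logits, labels)
    pair_terms = []
    for i, (text, sgt) in enumerate(zip(texts, sgts)):
        cfs = restricted_counterfactuals(text, sgt)
        if not cfs:
            continue  # no interchangeable context found; no pairing term
        cf_logits = classifier(cfs)  # (n_cf, num_classes)
        pair_terms.append(((cf_logits - logits[i]) ** 2).mean())
    if pair_terms:
        loss = loss + lam * torch.stack(pair_terms).mean()
    return loss
```

Restricting the pairing term to likelihood-similar counterfactuals is what distinguishes this from plain counterfactual logit pairing: sentences whose stereotypical language only makes sense for one SGT contribute no equalization penalty, so group-specific language is not forced toward identical predictions.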