Umang Gupta


2022

Mitigating Gender Bias in Distilled Language Models via Counterfactual Role Reversal
Umang Gupta | Jwala Dhamala | Varun Kumar | Apurv Verma | Yada Pruksachatkun | Satyapriya Krishna | Rahul Gupta | Kai-Wei Chang | Greg Ver Steeg | Aram Galstyan
Findings of the Association for Computational Linguistics: ACL 2022

Language models excel at generating coherent text, and model compression techniques such as knowledge distillation have enabled their use in resource-constrained settings. However, these models can be biased in multiple ways, including the unfounded association of male and female genders with gender-neutral professions. Knowledge distillation without any fairness constraints may therefore preserve or even exaggerate the teacher model's biases in the distilled model. To address this, we present a novel approach to mitigating gender disparity in text generation by learning a fair model during knowledge distillation. We propose two modifications to the base knowledge distillation process, both based on counterfactual role reversal: modifying the teacher's output probabilities and augmenting the training set. We evaluate gender polarity across professions in open-ended text generated by the resulting distilled and finetuned GPT-2 models and demonstrate a substantial reduction in gender disparity with only a minor compromise in utility. Finally, we observe that language models that reduce gender polarity in language generation do not improve embedding fairness or downstream classification fairness.
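
To make the two modifications concrete, here is a minimal sketch in PyTorch-style Python. The word-swap list, function names, and the rule for combining the two teacher distributions (simple averaging) are all illustrative assumptions for this sketch, not the paper's released implementation:

```python
import torch
import torch.nn.functional as F

# Illustrative gendered word pairs; the paper uses curated counterfactual
# role-reversal lists. These entries are assumptions for the sketch.
SWAP = {"he": "she", "she": "he", "his": "her", "her": "his"}

def counterfactual(tokens):
    """Produce the role-reversed context by swapping gendered tokens.
    Applying this to training sequences and adding the results to the
    corpus gives the second modification: counterfactual data augmentation."""
    return [SWAP.get(t, t) for t in tokens]

def equalized_teacher_probs(teacher_logits_fn, tokens):
    """First modification (sketched): combine the teacher's next-token
    distributions over the original and role-reversed contexts so that
    gendered continuations receive similar probability mass in the
    distillation targets. Averaging is one plausible combination rule."""
    p = F.softmax(teacher_logits_fn(tokens), dim=-1)
    p_cf = F.softmax(teacher_logits_fn(counterfactual(tokens)), dim=-1)
    return 0.5 * (p + p_cf)

def distillation_loss(student_logits, soft_targets):
    """Standard soft-label distillation loss against the modified targets."""
    return -(soft_targets * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()
```

A distillation step would then call `distillation_loss(student(tokens), equalized_teacher_probs(teacher, tokens))` in place of the usual unmodified teacher targets; `student` and `teacher` here stand in for any callables mapping a token context to next-token logits.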

Attributing Fair Decisions with Attention Interventions
Ninareh Mehrabi | Umang Gupta | Fred Morstatter | Greg Ver Steeg | Aram Galstyan
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)

The widespread use of Artificial Intelligence (AI) in consequential domains, such as healthcare and parole decision-making systems, has drawn intense scrutiny to the fairness of these methods. However, ensuring fairness alone is often insufficient, as the rationale for a contentious decision needs to be audited, understood, and defended. We propose that the attention mechanism can be used to ensure fair outcomes while simultaneously providing feature attributions that account for how a decision was made. Toward this goal, we design an attention-based model that can be leveraged as an attribution framework: through attention interventions and attention weight manipulation, it can identify the features responsible for both the performance and the fairness of the model. Using this attribution framework, we then design a post-processing bias mitigation strategy and compare it with a suite of baselines. We demonstrate the versatility of our approach by conducting experiments on two distinct data types: tabular and textual.
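
As a rough illustration of the attention-intervention idea (not the paper's architecture), the toy single-query attention below zeroes out attention to one input feature and measures how much the attended representation shifts; the tensor shapes and the norm-based score are assumptions made for the sketch:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def attend(q, K, V, blocked=None):
    """Single-query attention. The intervention removes one key position
    before the softmax, so its value vector cannot influence the output
    (the remaining attention weights renormalize automatically)."""
    scores = (q @ K.T) / K.shape[-1] ** 0.5
    if blocked is not None:
        scores[blocked] = float("-inf")
    return F.softmax(scores, dim=-1) @ V

# Toy setup: 4 input features embedded in 8 dimensions (random values;
# this only demonstrates the mechanics of the intervention).
q = torch.randn(8)
K, V = torch.randn(4, 8), torch.randn(4, 8)

full = attend(q, K, V)
for i in range(4):
    # Attribution of feature i = how far the attended representation
    # moves when attention to feature i is zeroed out.
    delta = (full - attend(q, K, V, blocked=i)).norm().item()
    print(f"feature {i}: attribution {delta:.3f}")
```

In the full framework described by the abstract, such interventions would be applied inside the trained model and their effect measured on task performance and fairness metrics rather than on a raw attended vector.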