BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers

Jiaqi Xue, Qian Lou, Mengxin Zheng


Abstract
Although many works have been developed to improve the fairness of deep learning models, their resilience against malicious attacks, particularly the growing threat of backdoor attacks, has not been thoroughly explored. Attacks on fairness are consequential because a compromised model can introduce biased outcomes, undermining trust and amplifying inequalities in sensitive applications such as hiring, healthcare, and law enforcement. This highlights the urgent need to understand how fairness mechanisms can be exploited and to develop defenses that ensure both fairness and robustness. We introduce *BadFair*, a novel backdoored fairness attack methodology. BadFair stealthily crafts a model that operates accurately and fairly under regular conditions but, when activated by certain triggers, discriminates against specific groups and produces incorrect results for them. This type of attack is particularly stealthy and dangerous, as it circumvents existing fairness detection methods, maintaining an appearance of fairness in normal use. Our findings reveal that BadFair achieves an average attack success rate of over 85% against target groups while incurring only minimal accuracy loss. Moreover, it consistently exhibits a significant discrimination score, distinguishing between pre-defined target and non-target attacked groups across various datasets and models.
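The abstract reports two headline numbers: an attack success rate (ASR) on triggered inputs from the target group, and a discrimination score separating target from non-target groups. As a minimal sketch of how such group-conditioned metrics could be computed, the snippet below defines ASR as the misclassification rate on triggered inputs and the discrimination score as the ASR gap between the target group and all other groups; these simplified definitions are assumptions for illustration and may differ from the paper's exact formulations.

```python
def attack_success_rate(preds, labels):
    """Fraction of triggered inputs the model misclassifies."""
    if not preds:
        return 0.0
    wrong = sum(p != y for p, y in zip(preds, labels))
    return wrong / len(preds)

def discrimination_score(preds, labels, groups, target_group):
    """ASR gap between the target group and all non-target groups
    on triggered inputs (illustrative definition)."""
    tgt = [(p, y) for p, y, g in zip(preds, labels, groups) if g == target_group]
    oth = [(p, y) for p, y, g in zip(preds, labels, groups) if g != target_group]
    asr_tgt = attack_success_rate([p for p, _ in tgt], [y for _, y in tgt])
    asr_oth = attack_success_rate([p for p, _ in oth], [y for _, y in oth])
    return asr_tgt - asr_oth
```

Under these assumed definitions, a successful BadFair-style attack would show a high `asr_tgt`, a low `asr_oth`, and thus a large positive discrimination score, while clean-input accuracy stays near the baseline.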
Anthology ID:
2024.findings-emnlp.484
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2024
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8257–8270
URL:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-emnlp.484/
DOI:
10.18653/v1/2024.findings-emnlp.484
Cite (ACL):
Jiaqi Xue, Qian Lou, and Mengxin Zheng. 2024. BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8257–8270, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
BadFair: Backdoored Fairness Attacks with Group-conditioned Triggers (Xue et al., Findings 2024)
PDF:
https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-emnlp.484.pdf