Watching the AI Watchdogs: A Fairness and Robustness Analysis of AI Safety Moderation Classifiers

Akshit Achara, Anshuman Chhabra


Abstract
AI Safety Moderation (ASM) classifiers are designed to moderate content on social media platforms and to serve as guardrails that prevent Large Language Models (LLMs) from being fine-tuned on unsafe inputs. Owing to their potential for disparate impact, it is crucial to ensure that these classifiers (1) do not unfairly classify content belonging to users from minority groups as unsafe compared to content from majority groups, and (2) behave robustly and consistently across similar inputs. In this work, we therefore examine the fairness and robustness of four widely used, closed-source ASM classifiers: OpenAI Moderation API, Perspective API, Google Cloud Natural Language (GCNL) API, and Clarifai API. We assess fairness using metrics such as demographic parity and conditional statistical parity, comparing the ASM classifiers against a fair-only baseline. We additionally analyze robustness by testing the classifiers' sensitivity to small, natural input perturbations. Our findings reveal potential fairness and robustness gaps, highlighting the need to mitigate these issues in future versions of these models.
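As a rough illustration of the demographic-parity style of fairness measurement mentioned in the abstract (not the paper's own code), a minimal sketch follows; the group labels, prediction arrays, and helper function are hypothetical, and the paper's exact evaluation protocol may differ.

```python
# Minimal sketch of a demographic-parity check for a binary "unsafe" classifier.
# Group labels and predictions below are hypothetical illustration data.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in P(flagged unsafe) across groups.

    predictions: iterable of 0/1 classifier outputs (1 = flagged unsafe)
    groups: iterable of group identifiers, aligned with predictions
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [num_flagged, num_total]
    for pred, grp in zip(predictions, groups):
        counts[grp][0] += pred
        counts[grp][1] += 1
    rates = {g: flagged / total for g, (flagged, total) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

# Example with two hypothetical demographic groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, per_group_rates = demographic_parity_gap(preds, groups)
print(per_group_rates)  # {'A': 0.75, 'B': 0.25}
print(gap)              # 0.5 -> a larger gap indicates a larger parity violation
```

Under demographic parity, a perfectly fair classifier flags content at the same rate for every group, so the gap above would be zero; conditional statistical parity applies the same comparison after conditioning on legitimate covariates.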
Anthology ID: 2025.naacl-short.22
Volume: Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 253–264
URL: https://preview.aclanthology.org/fix-sig-urls/2025.naacl-short.22/
Cite (ACL): Akshit Achara and Anshuman Chhabra. 2025. Watching the AI Watchdogs: A Fairness and Robustness Analysis of AI Safety Moderation Classifiers. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 253–264, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Watching the AI Watchdogs: A Fairness and Robustness Analysis of AI Safety Moderation Classifiers (Achara & Chhabra, NAACL 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.naacl-short.22.pdf