Human-Centered Disability Bias Detection in Large Language Models

Habiba Chakour, Fatiha Sadat


Abstract
To promote a more just and inclusive society, developers and researchers are strongly encouraged to design Language Models (LMs) with ethical considerations at the forefront, ensuring that the benefits and opportunities of AI are accessible to all users and communities. Incorporating humans in the loop is one recognized approach for mitigating general AI biases. Consequently, the development of new design guidelines and datasets is essential to help AI systems realize their full potential for the benefit of people with disabilities. This study aims to identify disability-related bias in large Masked Language Models (MLMs), specifically ELECTRA. A participatory and collaborative research approach was employed, involving three disability organizations to collect information on deaf and hard-of-hearing individuals. Our initial analysis reveals that the studied MLM is highly sensitive to the various identity references used to describe deaf and hard-of-hearing people.
Anthology ID:
2025.sciprodllm-1.2
Volume:
Proceedings of The First Workshop on Human–LLM Collaboration for Ethical and Responsible Science Production (SciProdLLM)
Month:
December
Year:
2025
Address:
Mumbai, India (Hybrid)
Editors:
Wei Zhao, Jennifer D’Souza, Steffen Eger, Anne Lauscher, Yufang Hou, Nafise Sadat Moosavi, Tristan Miller, Chenghua Lin
Venues:
SciProdLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
6–18
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.sciprodllm-1.2/
Cite (ACL):
Habiba Chakour and Fatiha Sadat. 2025. Human-Centered Disability Bias Detection in Large Language Models. In Proceedings of The First Workshop on Human–LLM Collaboration for Ethical and Responsible Science Production (SciProdLLM), pages 6–18, Mumbai, India (Hybrid). Association for Computational Linguistics.
Cite (Informal):
Human-Centered Disability Bias Detection in Large Language Models (Chakour & Sadat, SciProdLLM 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.sciprodllm-1.2.pdf