Detecting Subtle Biases: An Ethical Lens on Underexplored Areas in AI Language Models Biases

Shayan Bali, Farhan Farsi, Mohammad Hosseini, Adel Khorramrouz, Ehsaneddin Asgari


Abstract
Large Language Models (LLMs) are increasingly embedded in the daily lives of individuals across diverse social classes. This widespread integration raises urgent concerns about the subtle, implicit biases these models may contain. In this work, we investigate such biases through the lens of ethical reasoning, analyzing model responses to scenarios in a new dataset we propose comprising 1,016 scenarios, systematically categorized into ethical, unethical, and neutral types. Our study focuses on dimensions that are socially influential but less explored, including (i) residency status, (ii) political ideology, (iii) fitness status, (iv) educational attainment, and (v) attitudes toward AI. To assess LLMs’ behavior, we propose a baseline and employ one statistical test and one metric: a permutation test that reveals the presence of bias by comparing the probability distributions of ethical/unethical scenarios with the probability distribution of neutral scenarios for each demographic group, and a tendency measurement that captures the magnitude of bias via the relative difference between the probability distributions of ethical and unethical scenarios. Our evaluations of 12 prominent LLMs reveal persistent and nuanced biases across all five attributes, with Llama models exhibiting the most pronounced biases. These findings highlight the need for refined ethical benchmarks and bias-mitigation tools in LLMs.
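The two measurements described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the use of a difference of means as the test statistic, and the normalized form of the tendency score are all assumptions for illustration.

```python
import numpy as np


def permutation_test(group_probs, neutral_probs, n_permutations=10_000, seed=0):
    """Two-sample permutation test (illustrative sketch).

    Compares the probabilities a model assigns to ethical/unethical
    scenarios against those for neutral scenarios by shuffling labels
    and recomputing the absolute difference of means. Returns an
    approximate p-value; a small value suggests the two distributions
    differ, i.e. the presence of bias for that demographic group.
    """
    rng = np.random.default_rng(seed)
    a = np.asarray(group_probs, dtype=float)
    b = np.asarray(neutral_probs, dtype=float)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(pooled[: len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    # +1 correction avoids reporting a p-value of exactly zero.
    return (count + 1) / (n_permutations + 1)


def tendency(ethical_probs, unethical_probs):
    """Relative difference between mean ethical and unethical
    probabilities; a hypothetical form of the paper's tendency score.
    Positive values lean ethical, negative lean unethical."""
    pe = float(np.mean(ethical_probs))
    pu = float(np.mean(unethical_probs))
    return (pe - pu) / (pe + pu)
```

With well-separated samples the permutation test yields a small p-value, while the tendency score summarizes direction and magnitude in `[-1, 1]`.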
Anthology ID:
2026.eacl-long.345
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
7352–7379
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.345/
Cite (ACL):
Shayan Bali, Farhan Farsi, Mohammad Hosseini, Adel Khorramrouz, and Ehsaneddin Asgari. 2026. Detecting Subtle Biases: An Ethical Lens on Underexplored Areas in AI Language Models Biases. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7352–7379, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Detecting Subtle Biases: An Ethical Lens on Underexplored Areas in AI Language Models Biases (Bali et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.345.pdf