Are Bias Evaluation Methods Biased ?

Lina Berrayana, Sean Rooney, Luis Garcés-Erice, Ioana Giurgiu


Abstract
The creation of benchmarks to evaluate the safety of Large Language Models is one of the key activities within the trusted AI community. These benchmarks allow models to be compared for different aspects of safety such as toxicity, bias, harmful behavior, etc. Independent benchmarks adopt different approaches with distinct data sets and evaluation methods. We investigate how robust such benchmarks are by using different approaches to rank a set of representative models for bias and comparing how similar the overall rankings are. We show that different but widely used bias evaluation methods result in disparate model rankings. We conclude with recommendations for the community on the usage of such benchmarks.
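The abstract's central comparison, ranking the same set of models under different bias evaluation methods and measuring how much the rankings agree, can be illustrated with a rank-correlation check. The sketch below is not from the paper: the method names, model names, and scores are hypothetical placeholders, and Kendall's tau is used only as one common way to quantify agreement between two orderings.

```python
# Minimal sketch: compare model rankings produced by two bias evaluation
# methods using Kendall's tau rank correlation (scipy).
# All names and scores below are illustrative, not values from the paper.
from scipy.stats import kendalltau

models = ["model_a", "model_b", "model_c", "model_d", "model_e"]

# Hypothetical bias scores from two independent evaluation methods
# (lower score = less biased); real benchmarks would supply these.
scores_method_1 = {"model_a": 0.12, "model_b": 0.30, "model_c": 0.25,
                   "model_d": 0.40, "model_e": 0.18}
scores_method_2 = {"model_a": 0.35, "model_b": 0.10, "model_c": 0.28,
                   "model_d": 0.22, "model_e": 0.31}

def rank(scores):
    """Return each model's rank (1 = least biased) in the order of `models`."""
    ordered = sorted(models, key=lambda m: scores[m])
    return [ordered.index(m) + 1 for m in models]

ranks_1 = rank(scores_method_1)
ranks_2 = rank(scores_method_2)

# Tau near 1 means the two methods agree on the ordering;
# values near 0 or negative indicate disparate rankings.
tau, p_value = kendalltau(ranks_1, ranks_2)
print(f"Kendall's tau between the two rankings: {tau:.2f} (p={p_value:.3f})")
```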
Anthology ID:
2025.gem-1.22
Volume:
Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)
Month:
July
Year:
2025
Address:
Vienna, Austria and virtual meeting
Editors:
Kaustubh Dhole, Miruna Clinciu
Venues:
GEM | WS
Publisher:
Association for Computational Linguistics
Pages:
249–261
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.gem-1.22/
Cite (ACL):
Lina Berrayana, Sean Rooney, Luis Garcés-Erice, and Ioana Giurgiu. 2025. Are Bias Evaluation Methods Biased ?. In Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²), pages 249–261, Vienna, Austria and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Are Bias Evaluation Methods Biased ? (Berrayana et al., GEM 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.gem-1.22.pdf