Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings
Carolin M. Schuster, Maria-Alexandra Roman, Shashwat Ghatiwala, Georg Groh
Abstract
Large language models (LLMs) are the foundation of the current successes of artificial intelligence (AI); however, they are unavoidably biased. To effectively communicate the risks and encourage mitigation efforts, these models need adequate and intuitive descriptions of their discriminatory properties, appropriate for all audiences of AI. We suggest bias profiles with respect to stereotype dimensions based on dictionaries from social psychology research. Along these dimensions we investigate gender bias in contextual embeddings, across contexts and layers, and generate stereotype profiles for twelve different LLMs, demonstrating their intuitiveness and their use for exposing and visualizing bias.
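The general idea of scoring embeddings against stereotype dimensions can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: it compares the contextual embedding of a gendered term to the centroid of a small dictionary for one stereotype dimension ("warmth"). The model choice (`bert-base-uncased`), the `embed` and `score_dimension` helpers, and the toy word list are illustrative assumptions; the paper works with full social-psychology dictionaries and examines many contexts and layers, whereas this sketch uses a single bare-word context.

```python
# Hypothetical sketch of stereotype-dimension scoring in contextual
# embeddings. Helper names and word lists are illustrative, not the
# paper's actual setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def embed(word: str, layer: int = -1) -> torch.Tensor:
    """Mean-pool the subword embeddings of `word` at a given hidden layer."""
    inputs = tokenizer(word, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]
    # Drop [CLS]/[SEP]; average the remaining subword vectors.
    return hidden[0, 1:-1].mean(dim=0)

def score_dimension(target: str, dictionary: list[str]) -> float:
    """Cosine similarity of a target word to a stereotype-dictionary centroid."""
    centroid = torch.stack([embed(w) for w in dictionary]).mean(dim=0)
    return torch.cosine_similarity(embed(target), centroid, dim=0).item()

# Toy words standing in for a social-psychology warmth lexicon.
warmth = ["friendly", "warm", "kind", "caring"]
print("she vs. warmth:", score_dimension("she", warmth))
print("he  vs. warmth:", score_dimension("he", warmth))
```

The gap between the two scores gives a crude gender-bias signal for this dimension; repeating the comparison over several dictionaries and layers yields a profile in the spirit of the paper.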
- Anthology ID: 2025.nodalida-1.65
- Volume: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)
- Month: March
- Year: 2025
- Address: Tallinn, Estonia
- Editors: Richard Johansson, Sara Stymne
- Venue: NoDaLiDa
- Publisher: University of Tartu Library
- Pages: 639–650
- URL: https://preview.aclanthology.org/fix-sig-urls/2025.nodalida-1.65/
- Cite (ACL): Carolin M. Schuster, Maria-Alexandra Roman, Shashwat Ghatiwala, and Georg Groh. 2025. Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 639–650, Tallinn, Estonia. University of Tartu Library.
- Cite (Informal): Profiling Bias in LLMs: Stereotype Dimensions in Contextual Word Embeddings (Schuster et al., NoDaLiDa 2025)
- PDF: https://preview.aclanthology.org/fix-sig-urls/2025.nodalida-1.65.pdf