Investigating How LLMs Propagate Female Stereotypes: Comparing What Models Say via Prompts with What They Represent in Their Embeddings

Andrea Valderrey Nuñez, Jelke Bloem


Abstract
As Large Language Models (LLMs) are increasingly deployed in sensitive domains, concerns about their encoding and reproduction of social bias have intensified. We examine how gender stereotypes are represented in embeddings and expressed in outputs across three models: BERT, base LLaMA-2-7b, and instruction-tuned LLaMA-2-7b-Chat. Focusing on seven female-oriented stereotype categories, we compare embedding-level bias using Directional Embedding Probing with output-level behavior measured via masked token prediction (BERT) and narrative prompt completions (LLaMA models). LLaMA-2-Chat showed the strongest representational–behavioral alignment, with female-aligned scores ranging from 60% to 100% and a significant point-biserial correlation (r = 0.55, p = 0.0008). BERT exhibited weaker alignment (0%–60%; r = 0.39, p = 0.054), while base LLaMA-2 showed intermediate but inconsistent patterns. These findings suggest that instruction tuning is associated with clearer alignment between internal representations and generated outputs, while prompt design plays a critical role in surfacing latent bias. The study contributes to fairness research by emphasizing the need to assess both internal representations and their behavioral expression in LLMs.
Anthology ID:
2026.lrec-main.6
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resources Association
Pages:
77–92
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.6/
Cite (ACL):
Andrea Valderrey Nuñez and Jelke Bloem. 2026. Investigating How LLMs Propagate Female Stereotypes: Comparing What Models Say via Prompts with What They Represent in Their Embeddings. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 77–92, Palma de Mallorca, Spain. ELRA Language Resources Association.
Cite (Informal):
Investigating How LLMs Propagate Female Stereotypes: Comparing What Models Say via Prompts with What They Represent in Their Embeddings (Valderrey Nuñez & Bloem, LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.6.pdf