Bias A-head? Analyzing Bias in Transformer-Based Language Model Attention Heads

Yi Yang, Hanyu Duan, Ahmed Abbasi, John P. Lalor, Kar Yan Tam
Abstract
Transformer-based pretrained language models (PLMs) such as BERT and GPT have achieved remarkable success in NLP tasks. However, PLMs are prone to encoding stereotypical biases. Although a burgeoning literature has emerged on stereotypical bias mitigation in PLMs, such as work on debiasing gender and racial stereotyping, how such biases manifest and behave internally within PLMs remains largely unknown. Understanding the internal stereotyping mechanisms may allow better assessment of model fairness and guide the development of effective mitigation strategies. In this work, we focus on attention heads, a major component of the Transformer architecture, and propose a bias analysis framework to explore and identify a small set of biased heads that are found to contribute to a PLM’s stereotypical bias. We conduct extensive experiments to validate the existence of these biased heads and to better understand how they behave. We investigate gender and racial bias in the English language in two types of Transformer-based PLMs: the encoder-based BERT model and the decoder-based autoregressive GPT models, LLaMA-2 (7B) and LLaMA-2-Chat (7B). Overall, the results shed light on understanding the bias behavior in pretrained language models.
Anthology ID:
2025.trustnlp-main.18
Volume:
Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Trista Cao, Anubrata Das, Tharindu Kumarage, Yixin Wan, Satyapriya Krishna, Ninareh Mehrabi, Jwala Dhamala, Anil Ramakrishna, Aram Galstyan, Anoop Kumar, Rahul Gupta, Kai-Wei Chang
Venues:
TrustNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
276–290
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.18/
Cite (ACL):
Yi Yang, Hanyu Duan, Ahmed Abbasi, John P. Lalor, and Kar Yan Tam. 2025. Bias A-head? Analyzing Bias in Transformer-Based Language Model Attention Heads. In Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025), pages 276–290, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Bias A-head? Analyzing Bias in Transformer-Based Language Model Attention Heads (Yang et al., TrustNLP 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.18.pdf