Unsupervised Concept Vector Extraction for Bias Control in LLMs

Hannah Cyberey, Yangfeng Ji, David Evans


Abstract
Large language models (LLMs) are known to perpetuate stereotypes and exhibit biases. Various strategies have been proposed to mitigate these biases, but most prior work treats bias as a black-box problem, without considering how concepts are represented inside the model. We adapt techniques from representation engineering to study how the concept of “gender” is represented within LLMs. We introduce a new method that extracts concept representations via probability weighting, without labeled data, and efficiently selects a steering vector for measuring and manipulating the model’s representation. We also develop a projection-based method that enables precise steering of model predictions, demonstrate its effectiveness in mitigating gender bias in LLMs, and show that it generalizes to racial bias.
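
To make the projection-based steering the abstract describes concrete, here is a minimal sketch, not the paper's implementation: the hidden state `h`, the concept direction `v`, and the steering strength `alpha` are stand-in placeholders (the paper extracts `v` via probability weighting over unlabeled data, which this sketch does not reproduce).

```python
import numpy as np

def steer(h: np.ndarray, v: np.ndarray, alpha: float = 0.0) -> np.ndarray:
    """Remove the component of hidden state h along the concept direction v,
    then re-inject that direction with a chosen strength alpha.

    alpha = 0 neutralizes the concept; alpha > 0 or alpha < 0 pushes the
    representation toward one pole of the concept axis or the other.
    """
    v_hat = v / np.linalg.norm(v)      # unit-norm concept direction
    coeff = h @ v_hat                  # scalar projection of h onto v_hat
    h_orth = h - coeff * v_hat         # component of h orthogonal to v
    return h_orth + alpha * v_hat      # set the concept component to alpha

# Toy usage with random stand-ins for a hidden state and a concept vector.
rng = np.random.default_rng(0)
h = rng.normal(size=768)
v = rng.normal(size=768)
h_steered = steer(h, v, alpha=0.0)
print(h_steered @ (v / np.linalg.norm(v)))  # ~0: concept component removed
```

In practice such a transform would be applied to intermediate activations during generation (e.g., via a forward hook on a transformer layer), with `alpha` controlling how strongly the model's predictions are steered along the extracted axis.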
Anthology ID: 2025.emnlp-main.1439
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 28321–28343
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1439/
Cite (ACL): Hannah Cyberey, Yangfeng Ji, and David Evans. 2025. Unsupervised Concept Vector Extraction for Bias Control in LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 28321–28343, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Unsupervised Concept Vector Extraction for Bias Control in LLMs (Cyberey et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1439.pdf
Checklist: 2025.emnlp-main.1439.checklist.pdf