Iterative Multilingual Spectral Attribute Erasure
Shun Shao, Yftah Ziser, Zheng Zhao, Yifu Qiu, Shay B Cohen, Anna Korhonen
Abstract
Multilingual representations embed words with similar meanings into a shared semantic space across languages, creating opportunities to transfer debiasing effects between languages. However, existing debiasing methods cannot exploit this opportunity because they operate on individual languages. We present Iterative Multilingual Spectral Attribute Erasure (IMSAE), which identifies and mitigates joint bias subspaces across multiple languages through iterative SVD-based truncation. Evaluating IMSAE across eight languages and five demographic dimensions, we demonstrate its effectiveness in both standard and zero-shot settings, where target-language data is unavailable but linguistically similar languages can be used for debiasing. Our comprehensive experiments across diverse language models (BERT, LLaMA, Mistral) show that IMSAE outperforms traditional monolingual and cross-lingual approaches while maintaining model utility.
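The abstract names iterative SVD-based truncation of a joint bias subspace but does not spell out the procedure on this page. The following is a minimal illustrative sketch of that general idea, not the authors' IMSAE implementation: the function name `erase_joint_subspace`, its parameters, and the use of attribute mean-difference rows to represent the protected attribute are assumptions made for the example.

```python
import numpy as np

def erase_joint_subspace(reps_by_lang, attr_rows_by_lang, k=2, iters=3):
    """Illustrative sketch (not the authors' IMSAE code): iteratively
    estimate a joint bias subspace shared across languages via SVD and
    project every language's representations onto its complement.

    reps_by_lang:      dict lang -> (n_l, d) matrix of representations to debias
    attr_rows_by_lang: dict lang -> (m_l, d) rows encoding the protected
                       attribute (e.g., group mean-difference vectors); assumed
    k:      number of joint bias directions removed per iteration (assumed)
    iters:  number of erasure iterations (assumed)
    """
    reps = {lang: X.astype(float).copy() for lang, X in reps_by_lang.items()}
    attrs = {lang: A.astype(float).copy() for lang, A in attr_rows_by_lang.items()}
    d = next(iter(reps.values())).shape[1]
    for _ in range(iters):
        # Stack attribute-bearing rows from all languages; their top-k right
        # singular vectors give a candidate joint (cross-lingual) bias subspace.
        joint = np.vstack(list(attrs.values()))
        _, _, vt = np.linalg.svd(joint, full_matrices=False)
        B = vt[:k]                       # (k, d), orthonormal bias directions
        P = np.eye(d) - B.T @ B          # projector onto their orthogonal complement
        # Remove the joint subspace from every language, then re-estimate.
        reps = {lang: X @ P for lang, X in reps.items()}
        attrs = {lang: A @ P for lang, A in attrs.items()}
    return reps

# Toy usage with random data for two languages (purely illustrative):
rng = np.random.default_rng(0)
reps = {"en": rng.normal(size=(100, 64)), "de": rng.normal(size=(80, 64))}
attrs = {"en": rng.normal(size=(10, 64)), "de": rng.normal(size=(10, 64))}
debiased = erase_joint_subspace(reps, attrs, k=2, iters=3)
```

Each iteration re-estimates the joint subspace from the already-projected representations, which is what makes the truncation iterative rather than a single rank-k projection.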
- Anthology ID: 2025.emnlp-main.1488
- Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2025
- Address: Suzhou, China
- Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 29218–29243
- URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1488/
- Cite (ACL): Shun Shao, Yftah Ziser, Zheng Zhao, Yifu Qiu, Shay B Cohen, and Anna Korhonen. 2025. Iterative Multilingual Spectral Attribute Erasure. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 29218–29243, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal): Iterative Multilingual Spectral Attribute Erasure (Shao et al., EMNLP 2025)
- PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1488.pdf