Abstract
Multilingual models aim for language-invariant representations but still prominently encode language identity. This, together with the scarcity of high-quality parallel retrieval data, limits their retrieval performance. We introduce LANCER, a multi-task learning framework that improves language-invariant dense retrieval by reducing language-specific signals in the embedding space. Leveraging the notion of linear concept erasure, we design a loss function that penalizes the cross-correlation between representations and their language labels. LANCER uses only English retrieval data and general multilingual corpora, training models to retrieve by semantic similarity in a language-invariant way without requiring a vast parallel corpus. Experimental results on various datasets show that our method consistently improves over baselines, with extensive analyses demonstrating greater language agnosticism.
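To make the cross-correlation penalty concrete, the sketch below shows one plausible PyTorch instantiation: embeddings and one-hot language labels are standardized over the batch, and the squared Frobenius norm of their cross-correlation matrix is penalized. The function name, the one-hot label encoding, and the exact normalization are assumptions for illustration, not the paper's definition of the LANCER loss.

```python
import torch
import torch.nn.functional as F

def language_decorrelation_loss(embeddings: torch.Tensor,
                                lang_labels: torch.Tensor,
                                num_langs: int,
                                eps: float = 1e-8) -> torch.Tensor:
    """Illustrative penalty on the cross-correlation between embeddings
    and language identity (hypothetical instantiation, not the paper's
    exact loss).

    embeddings:  (B, D) batch of dense representations.
    lang_labels: (B,) integer language ids in [0, num_langs).
    Returns the squared Frobenius norm of the batch cross-correlation
    matrix between standardized embeddings and one-hot language labels.
    """
    # Standardize each embedding dimension over the batch.
    z = embeddings - embeddings.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + eps)

    # One-hot encode and standardize the language labels the same way.
    y = F.one_hot(lang_labels, num_classes=num_langs).float()
    y = y - y.mean(dim=0, keepdim=True)
    y = y / (y.std(dim=0, keepdim=True) + eps)

    # Cross-correlation matrix between embedding dims and languages.
    corr = z.t() @ y / z.size(0)  # shape: (D, num_langs)

    # Driving this toward zero discourages any linear readout of
    # language identity from the embedding space.
    return corr.pow(2).sum()
```

In a multi-task setup like the one the abstract describes, a term of this kind would typically be added to the retrieval objective with a weighting hyperparameter, e.g. `total = retrieval_loss + lam * language_decorrelation_loss(z, labels, num_langs)`, where the weight `lam` is hypothetical.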
- Anthology ID: 2024.emnlp-main.736
- Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 13261–13273
- URL: https://aclanthology.org/2024.emnlp-main.736
- DOI: 10.18653/v1/2024.emnlp-main.736
- Cite (ACL): Zhiqi Huang, Puxuan Yu, Shauli Ravfogel, and James Allan. 2024. Language Concept Erasure for Language-invariant Dense Retrieval. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13261–13273, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): Language Concept Erasure for Language-invariant Dense Retrieval (Huang et al., EMNLP 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.emnlp-main.736.pdf