Houxing Ren
2022
Lexicon-Enhanced Self-Supervised Training for Multilingual Dense Retrieval
Houxing Ren | Linjun Shou | Jian Pei | Ning Wu | Ming Gong | Daxin Jiang
Findings of the Association for Computational Linguistics: EMNLP 2022
Recent multilingual pre-trained models have shown strong performance on various multilingual tasks. However, these models perform poorly on multilingual retrieval tasks due to a lack of multilingual training data. In this paper, we propose to mine and generate self-supervised training data from a large-scale unlabeled corpus. We carefully design a mining method that combines sparse and dense models to estimate the relevance between unlabeled queries and passages, and we introduce a query generator to generate additional queries in target languages for unlabeled passages. Through extensive experiments on the Mr. TYDI dataset and an industrial dataset from a commercial search engine, we demonstrate that our method outperforms baselines based on various pre-trained multilingual models, and even achieves on-par performance with the supervised method on the latter dataset.
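To make the hybrid mining step concrete, here is a minimal Python sketch of fusing sparse (e.g., BM25) and dense (embedding-similarity) scores to select pseudo-positive query-passage pairs. The function name mine_pairs, the min-max normalization, the fusion weight alpha, and the threshold are hypothetical illustration choices, not the paper's exact procedure.

    # Minimal sketch of hybrid sparse+dense relevance mining
    # (hypothetical fusion rule and threshold; the paper's exact
    # mining criterion may differ).
    import numpy as np

    def mine_pairs(sparse_scores, dense_scores, alpha=0.5, threshold=0.8):
        """Fuse min-max-normalized sparse and dense scores, then keep
        candidate passages whose fused score clears the threshold as
        pseudo-positive training pairs for one unlabeled query."""
        def norm(x):
            x = np.asarray(x, dtype=float)
            span = x.max() - x.min()
            return (x - x.min()) / span if span > 0 else np.zeros_like(x)

        fused = alpha * norm(sparse_scores) + (1 - alpha) * norm(dense_scores)
        return [i for i, s in enumerate(fused) if s >= threshold]

    # Example: sparse and dense scores of 4 candidate passages
    # for one unlabeled query; only passage 0 survives the cut.
    print(mine_pairs([12.1, 3.4, 8.7, 1.2], [0.91, 0.35, 0.78, 0.10]))

Fusing the two signals filters out pairs that only one model favors, which is the intuition behind combining sparse and dense evidence before using the mined pairs for self-supervised training.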
Empowering Dual-Encoder with Query Generator for Cross-Lingual Dense Retrieval
Houxing Ren | Linjun Shou | Ning Wu | Ming Gong | Daxin Jiang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
In monolingual dense retrieval, many works focus on distilling knowledge from a cross-encoder re-ranker into a dual-encoder retriever, and these methods achieve better performance thanks to the effectiveness of the cross-encoder re-ranker. However, we find that the re-ranker's performance depends heavily on the number of training samples and the quality of negative samples, both of which are hard to obtain in the cross-lingual setting. In this paper, we propose to use a query generator as the teacher in the cross-lingual setting, since it is less dependent on abundant training samples and high-quality negative samples. Beyond traditional knowledge distillation, we further propose a novel enhancement method that uses the query generator to help the dual-encoder align queries from different languages, without requiring any additional parallel sentences. Experimental results show that our method outperforms state-of-the-art methods on two benchmark datasets.
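As one way to picture generator-as-teacher distillation, the Python sketch below treats the generator's log-likelihood log P(query | passage) as a teacher score and matches the dual-encoder's similarity distribution to it with a KL-divergence loss over candidate passages. The function names, the temperature parameter, and the KL form are assumptions for illustration; the paper's actual training objective may differ.

    # Minimal sketch of distilling a query generator's relevance signal
    # into a dual-encoder (hypothetical loss form; the paper's objective
    # may differ).
    import numpy as np

    def softmax(x, temp=1.0):
        z = np.asarray(x, dtype=float) / temp
        z = z - z.max()  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    def kl_distill_loss(teacher_loglikes, student_sims, temp=1.0):
        """KL(teacher || student) over one query's candidate passages.
        teacher_loglikes: log P(query | passage) from the generator;
        student_sims: similarity scores from the dual-encoder."""
        p = softmax(teacher_loglikes, temp)  # teacher distribution
        q = softmax(student_sims, temp)      # student distribution
        return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

    # Example: 3 candidate passages for one query; the loss shrinks as
    # the student's ranking approaches the teacher's.
    print(kl_distill_loss([-2.1, -5.3, -4.0], [0.82, 0.31, 0.47]))

Unlike a cross-encoder teacher, such a generator scores a pair by how well the passage explains the query, which is why it can be trained with fewer labeled pairs and weaker negatives.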