Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering
Jiawei Zhou, Xiaoguang Li, Lifeng Shang, Lan Luo, Ke Zhan, Enrui Hu, Xinyu Zhang, Hao Jiang, Zhao Cao, Fan Yu, Xin Jiang, Qun Liu, Lei Chen
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022
To alleviate the data scarcity problem in training question answering systems, recent works propose additional intermediate pre-training for dense passage retrieval (DPR). However, a large discrepancy still remains between the provided upstream signals and the downstream question-passage relevance, which limits the improvement. To bridge this gap, we propose HyperLink-induced Pre-training (HLP), a method to pre-train the dense retriever with the text relevance induced by hyperlink-based topology within Web documents. We demonstrate that the hyperlink-based structures of dual-link and co-mention can provide effective relevance signals for large-scale pre-training that better facilitate downstream passage retrieval. We investigate the effectiveness of our approach across a wide range of open-domain QA datasets under zero-shot, few-shot, multi-hop, and out-of-domain scenarios. The experiments show that HLP outperforms BM25 by up to 7 points and other pre-training methods by more than 10 points in terms of top-20 retrieval accuracy under the zero-shot scenario. Furthermore, HLP significantly outperforms other pre-training methods under the other scenarios.
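As a rough illustration of the two hyperlink topologies the abstract names, the sketch below mines pseudo query-passage pairs from a toy corpus: dual-link pairs (two documents that hyperlink to each other) and co-mention pairs (two documents that both hyperlink to a common third document). The corpus layout and function names are assumptions for illustration only, not the authors' actual pipeline.

```python
# Hypothetical sketch of mining hyperlink-induced pseudo query-passage pairs.
# The corpus representation below is an assumption: doc_id -> passage text
# plus the set of doc_ids it hyperlinks to.
from collections import defaultdict
from itertools import combinations

corpus = {
    "A": {"text": "Passage from document A ...", "links": {"B", "C"}},
    "B": {"text": "Passage from document B ...", "links": {"A"}},
    "D": {"text": "Passage from document D ...", "links": {"C"}},
}

def dual_link_pairs(corpus):
    """Pairs of passages whose documents hyperlink to each other."""
    pairs = []
    for a, doc_a in corpus.items():
        for b in doc_a["links"]:
            # a < b keeps each unordered pair once
            if b in corpus and a in corpus[b]["links"] and a < b:
                pairs.append((corpus[a]["text"], corpus[b]["text"]))
    return pairs

def co_mention_pairs(corpus):
    """Pairs of passages whose documents both hyperlink to a common target."""
    inlinks = defaultdict(set)  # target doc -> docs that link to it
    for a, doc in corpus.items():
        for target in doc["links"]:
            inlinks[target].add(a)
    pairs = set()
    for sources in inlinks.values():
        for a, b in combinations(sorted(sources), 2):
            pairs.add((corpus[a]["text"], corpus[b]["text"]))
    return list(pairs)

print(dual_link_pairs(corpus))   # (A, B): they link to each other
print(co_mention_pairs(corpus))  # (A, D): both link to the common document C
```

Such pairs would then serve as the relevance signal for contrastive pre-training of the dense retriever, standing in for labeled question-passage pairs.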