Stan Peshterliev
2022
UniK-QA: Unified Representations of Structured and Unstructured Knowledge for Open-Domain Question Answering
Barlas Oguz | Xilun Chen | Vladimir Karpukhin | Stan Peshterliev | Dmytro Okhonko | Michael Schlichtkrull | Sonal Gupta | Yashar Mehdad | Scott Yih
Findings of the Association for Computational Linguistics: NAACL 2022
We study open-domain question answering with structured, unstructured and semi-structured knowledge sources, including text, tables, lists and knowledge bases. Departing from prior work, we propose a unifying approach that homogenizes all sources by reducing them to text and applies the retriever-reader model, which has so far been limited to text sources only. Our approach greatly improves the results on knowledge-base QA tasks, by 11 points compared to the latest graph-based methods. More importantly, we demonstrate that our unified knowledge (UniK-QA) model is a simple and yet effective way to combine heterogeneous sources of knowledge, advancing the state-of-the-art results on two popular question answering benchmarks, NaturalQuestions and WebQuestions, by 3.5 and 2.6 points, respectively. The code of UniK-QA is available at: https://github.com/facebookresearch/UniK-QA.
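The abstract's core idea of "reducing all sources to text" can be sketched in a few lines. The helpers and output formats below are illustrative assumptions, not the paper's exact verbalization scheme:

```python
# Hypothetical sketch of the UniK-QA homogenization step: knowledge-base
# triples and table rows are flattened into plain-text passages so a
# standard text retriever-reader can index them alongside ordinary text.
# Function names and the exact textual templates are assumptions.

def linearize_triple(subj, rel, obj):
    """Turn a knowledge-base triple into a short text sentence."""
    return f"{subj} {rel.replace('_', ' ')} {obj}."

def linearize_table_row(headers, row, title=""):
    """Turn one table row into a passage of 'header is value' pairs."""
    cells = ", ".join(f"{h} is {v}" for h, v in zip(headers, row))
    return f"{title}: {cells}." if title else f"{cells}."

passages = [
    linearize_triple("Barack Obama", "born_in", "Honolulu"),
    linearize_table_row(["Country", "Capital"], ["France", "Paris"],
                        title="European capitals"),
]
# passages[0] -> "Barack Obama born in Honolulu."
```

Once structured sources are verbalized this way, they can be chunked and indexed exactly like ordinary text passages for the retriever.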
Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?
Xilun Chen | Kushal Lakhotia | Barlas Oguz | Anchit Gupta | Patrick Lewis | Stan Peshterliev | Yashar Mehdad | Sonal Gupta | Wen-tau Yih
Findings of the Association for Computational Linguistics: EMNLP 2022
Despite their recent popularity and well-known advantages, dense retrievers still lag behind sparse methods such as BM25 in their ability to reliably match salient phrases and rare entities in the query and to generalize to out-of-domain data. It has been argued that this is an inherent limitation of dense models. We rebut this claim by introducing the Salient Phrase Aware Retriever (SPAR), a dense retriever with the lexical matching capacity of a sparse model. We show that a dense Lexical Model Λ can be trained to imitate a sparse one, and SPAR is built by augmenting a standard dense retriever with Λ. Empirically, SPAR shows superior performance on a range of tasks, including five question answering datasets, MS MARCO passage retrieval, and the EntityQuestions and BEIR benchmarks for out-of-domain evaluation, exceeding the performance of state-of-the-art dense and sparse retrievers. The code and models of SPAR are available at: https://github.com/facebookresearch/dpr-scale/tree/main/spar
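The augmentation described in the abstract can be sketched with toy vectors: concatenating the base retriever's embedding with the Lexical Model Λ's embedding makes the final dot-product score decompose into the sum of the two models' scores. The tiny vectors and the scalar `weight` below are illustrative assumptions, not SPAR's actual encoders or tuning:

```python
# Illustrative sketch of the SPAR composition: a standard dense retriever
# is augmented with a dense Lexical Model by concatenating their query
# and passage embeddings. The vectors here are made-up placeholders
# standing in for real encoder outputs.
import numpy as np

def spar_embed(dense_vec, lexical_vec, weight=1.0):
    """Concatenate base-retriever and lexical-model embeddings.

    `weight` scales the lexical component, trading off semantic vs.
    lexical matching (this scaling scheme is an assumption here).
    """
    return np.concatenate([dense_vec, weight * lexical_vec])

q_dense, q_lex = np.array([0.1, 0.2]), np.array([0.3, 0.4])
p_dense, p_lex = np.array([0.5, 0.6]), np.array([0.7, 0.8])

q = spar_embed(q_dense, q_lex)
p = spar_embed(p_dense, p_lex)

# With weight=1.0, the dot product over the concatenation equals the
# sum of the two models' individual scores.
score = float(q @ p)
assert np.isclose(score, q_dense @ p_dense + q_lex @ p_lex)
```

Because the combined representation is still a single fixed-size vector, standard maximum inner-product search indexes work unchanged on the concatenated embeddings.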
Co-authors
- Barlas Oguz 2
- Xilun Chen 2
- Sonal Gupta 2
- Yashar Mehdad 2
- Vladimir Karpukhin 1