Answering Open-Domain Questions of Varying Reasoning Steps from Text

Peng Qi, Haejun Lee, Tg Sido, Christopher Manning


Abstract
We develop a unified system to answer directly from text open-domain questions that may require a varying number of retrieval steps. We employ a single multi-task transformer model to perform all the necessary subtasks—retrieving supporting facts, reranking them, and predicting the answer from all retrieved documents—in an iterative fashion. We avoid crucial assumptions of previous work that do not transfer well to real-world settings, including exploiting knowledge of the fixed number of retrieval steps required to answer each question or using structured metadata like knowledge bases or web links that have limited availability. Instead, we design a system that can answer open-domain questions on any text collection without prior knowledge of reasoning complexity. To emulate this setting, we construct a new benchmark, called BeerQA, by combining existing one- and two-step datasets with a new collection of 530 questions that require three Wikipedia pages to answer, unifying Wikipedia corpora versions in the process. We show that our model demonstrates competitive performance on both existing benchmarks and this new benchmark. We make the new benchmark available at https://beerqa.github.io/.
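The abstract describes an iterative loop in which a single multi-task transformer alternates between retrieving supporting paragraphs, reranking them, and deciding whether it can already answer, so the number of hops is not fixed in advance. Below is a minimal sketch of that loop. The function names (`retrieve`, `rerank_and_read`) and output fields are hypothetical placeholders, not the authors' API; the actual implementation is in the beerqa/irrr repository linked below.

```python
# Minimal sketch of an iterative retrieve-rerank-read loop with a variable
# number of hops. All callables and field names are hypothetical stand-ins
# for the single multi-task model described in the paper.

from typing import Callable, List, Optional


def answer_question(
    question: str,
    retrieve: Callable[[str, List[str]], List[str]],    # (question, context) -> candidate paragraphs
    rerank_and_read: Callable[[str, List[str]], dict],  # (question, paragraphs) -> model decision
    max_steps: int = 4,
) -> Optional[str]:
    """Expand the retrieved context one hop at a time until the model
    predicts an answer, without assuming a fixed number of retrieval steps."""
    context: List[str] = []
    for _ in range(max_steps):
        # 1. Retrieve new candidates conditioned on the question and current context.
        candidates = retrieve(question, context)
        # 2. The same model reranks the pool and decides whether to answer now.
        output = rerank_and_read(question, context + candidates)
        context = output["kept_paragraphs"]      # reranked supporting facts to carry forward
        if output.get("answer") is not None:     # model chose to stop and answer
            return output["answer"]
        # Otherwise the model requested another retrieval step.
    return None  # no answer found within the step budget
```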
Anthology ID:
2021.emnlp-main.292
Volume:
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3599–3614
URL:
https://aclanthology.org/2021.emnlp-main.292
DOI:
10.18653/v1/2021.emnlp-main.292
Cite (ACL):
Peng Qi, Haejun Lee, Tg Sido, and Christopher Manning. 2021. Answering Open-Domain Questions of Varying Reasoning Steps from Text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3599–3614, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Answering Open-Domain Questions of Varying Reasoning Steps from Text (Qi et al., EMNLP 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/2021.emnlp-main.292.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-1/2021.emnlp-main.292.mp4
Code:
beerqa/irrr
Data:
HotpotQA, KILT, SQuAD