FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models

Andrew Zhu, Alyssa Hwang, Liam Dugan, Chris Callison-Burch


Abstract
One type of question commonly found in day-to-day scenarios is the "fan-out" question: a complex multi-hop, multi-document reasoning question that requires finding information about a large number of entities. However, few resources exist to evaluate this type of question-answering capability in large language models. To evaluate complex reasoning in LLMs more fully, we present FanOutQA, a high-quality dataset of fan-out question-answer pairs and human-annotated decompositions, with English Wikipedia as the knowledge base. We formulate three benchmark settings across our dataset and benchmark seven LLMs, including GPT-4, LLaMA 2, Claude-2.1, and Mixtral-8x7B, finding that contemporary models still have room to improve their reasoning over inter-document dependencies in a long context. We provide our dataset, along with open-source tools for running models on it, to encourage evaluation.
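To make the task concrete, below is a minimal sketch of what a fan-out question and its human-annotated decomposition might look like as a record. The example question mirrors the kind of fan-out question the paper describes; the field names and the loader are illustrative assumptions rather than the dataset's actual schema, so consult the released dataset and tools for the real format.

```python
import json

# A sketch of one FanOutQA-style record: answering the top-level question
# requires fanning out to many Wikipedia articles (one per entity) and then
# aggregating the retrieved facts. Field names are illustrative assumptions,
# not the dataset's actual schema.
example = {
    "question": (
        "What is the total number of employees in the five largest "
        "banks in the world?"
    ),
    "decomposition": [
        {
            "sub_question": "What are the five largest banks in the world?",
            "evidence_article": "List of largest banks",  # a Wikipedia page
        },
        {
            "sub_question": "How many employees does each of those banks have?",
            "evidence_article": "<one Wikipedia article per bank>",
        },
    ],
    "answer": "<the sum of the per-bank employee counts>",
}


def load_questions(path: str) -> list[dict]:
    """Load a list of question records from a JSON file (hypothetical loader)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


if __name__ == "__main__":
    print(json.dumps(example, indent=2))
```

Roughly, the paper's three benchmark settings correspond to answering such a question from parametric knowledge alone (closed book), with the ability to retrieve Wikipedia articles (open book), or with the gold evidence articles supplied in context.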
Anthology ID: 2024.acl-short.2
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 18–37
URL: https://aclanthology.org/2024.acl-short.2
DOI: 10.18653/v1/2024.acl-short.2
Cite (ACL): Andrew Zhu, Alyssa Hwang, Liam Dugan, and Chris Callison-Burch. 2024. FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 18–37, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): FanOutQA: A Multi-Hop, Multi-Document Question Answering Benchmark for Large Language Models (Zhu et al., ACL 2024)
PDF: https://preview.aclanthology.org/landing_page/2024.acl-short.2.pdf