Modeling Uncertainty and Using Post-fusion as Fallback Improves Retrieval Augmented Generation with LLMs
Ye Liu, Rui Meng, Meghana Moorthy Bhat, Shafiq Joty, Caiming Xiong, Yingbo Zhou, Semih Yavuz
Abstract
The integration of retrieved passages and large language models (LLMs), such as ChatGPT, has significantly improved open-domain question answering. However, the optimal way to incorporate retrieved passages into the answer generation process remains underexplored. This paper aims to fill this gap by investigating different methods of combining retrieved passages with LLMs to enhance answer generation. We begin by examining the limitations of the commonly used concatenation approach. Surprisingly, this approach often produces “unknown” outputs, even when the correct document is among the top-k retrieved passages. To address this issue, we explore four alternative strategies for integrating the retrieved passages with LLMs: two single-round methods that utilize chain-of-thought reasoning and two multi-round strategies that incorporate feedback loops. Through comprehensive analyses and experiments, we provide insightful observations on how to effectively leverage retrieved passages to enhance the answer generation capability of LLMs. On three open-domain question answering datasets, NQ, TriviaQA, and SQuAD, our multi-round approaches outperform the traditional concatenation approach, achieving over a 10% improvement in answer exact match (EM). An illustrative sketch of the post-fusion fallback appears after the metadata below.
- Anthology ID:
- 2024.knowllm-1.7
- Volume:
- Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)
- Month:
- August
- Year:
- 2024
- Address:
- Bangkok, Thailand
- Editors:
- Sha Li, Manling Li, Michael JQ Zhang, Eunsol Choi, Mor Geva, Peter Hase, Heng Ji
- Venues:
- KnowLLM | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 69–82
- URL:
- https://aclanthology.org/2024.knowllm-1.7
- DOI:
- 10.18653/v1/2024.knowllm-1.7
- Cite (ACL):
- Ye Liu, Rui Meng, Meghana Moorthy Bhat, Shafiq Joty, Caiming Xiong, Yingbo Zhou, and Semih Yavuz. 2024. Modeling Uncertainty and Using Post-fusion as Fallback Improves Retrieval Augmented Generation with LLMs. In Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024), pages 69–82, Bangkok, Thailand. Association for Computational Linguistics.
- Cite (Informal):
- Modeling Uncertainty and Using Post-fusion as Fallback Improves Retrieval Augmented Generation with LLMs (Liu et al., KnowLLM-WS 2024)
- PDF:
- https://preview.aclanthology.org/autopr/2024.knowllm-1.7.pdf
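The fallback strategy described in the abstract can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the `llm` callable stands in for any chat-completion client, the prompt wording is invented, and majority voting is just one plausible fusion rule; the paper's uncertainty modeling is omitted here.

```python
from collections import Counter
from typing import Callable, List


def concat_answer(question: str, passages: List[str],
                  llm: Callable[[str], str]) -> str:
    """Baseline: stuff all top-k passages into a single prompt."""
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using the passages below. "
        "Reply 'unknown' if they do not contain the answer.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt).strip()


def post_fusion_answer(question: str, passages: List[str],
                       llm: Callable[[str], str]) -> str:
    """Query each passage independently, then fuse by majority vote."""
    answers = []
    for passage in passages:
        prompt = (
            "Answer the question using only this passage. "
            "Reply 'unknown' if it does not contain the answer.\n\n"
            f"Passage:\n{passage}\n\nQuestion: {question}\nAnswer:"
        )
        answer = llm(prompt).strip()
        if answer.lower() != "unknown":
            answers.append(answer)
    if not answers:
        return "unknown"
    return Counter(answers).most_common(1)[0][0]


def answer_with_fallback(question: str, passages: List[str],
                         llm: Callable[[str], str]) -> str:
    """Try the cheap concatenation prompt first; fall back to
    per-passage post-fusion only when the model abstains."""
    first = concat_answer(question, passages, llm)
    if first.lower() == "unknown":
        return post_fusion_answer(question, passages, llm)
    return first
```

In this sketch the concatenation prompt handles the common case with a single LLM call, and the k extra calls required by post-fusion are paid only when the model abstains with “unknown”.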