Making Long-Context Language Models Better Multi-Hop Reasoners

Yanyang Li, Shuo Liang, Michael Lyu, Liwei Wang


Abstract
Recent advancements in long-context modeling have enhanced language models (LMs) for complex tasks across multiple NLP applications. Despite this progress, we find that these models struggle with multi-hop reasoning and exhibit decreased performance in the presence of noisy contexts. In this paper, we introduce Reasoning with Attributions, a novel approach that prompts LMs to supply attributions for each assertion during their reasoning. We validate our approach through experiments on three multi-hop datasets, employing both proprietary and open-source models, and demonstrate its efficacy and resilience. Furthermore, we explore methods to augment reasoning capabilities via fine-tuning and offer an attribution-annotated dataset and a specialized training strategy. Our fine-tuned model achieves competitive performance on multi-hop reasoning benchmarks, closely paralleling proprietary LMs such as ChatGPT and Claude-instant.
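The core idea in the abstract, prompting the model to attach an attribution to each assertion in its reasoning chain, can be sketched as a simple prompt template plus a parser. The following Python is a minimal illustration only, not the paper's released code: the template wording, the `(Attribution: [k])` marker format, and the helper names `build_attribution_prompt` and `extract_attributions` are assumptions made for this sketch.

```python
import re

def build_attribution_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt that asks the model to cite a numbered source
    passage after every assertion in its reasoning.
    (Hypothetical template; the paper's actual wording may differ.)"""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using the numbered passages below. "
        "Reason step by step, and after each assertion cite the "
        "supporting passage in the form (Attribution: [k]).\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nReasoning:"
    )

def extract_attributions(reasoning: str) -> list[int]:
    """Recover the cited passage ids from a model response, so that
    uncited (likely noisy) passages can be identified."""
    return [int(k) for k in re.findall(r"\(Attribution: \[(\d+)\]\)", reasoning)]

if __name__ == "__main__":
    passages = [
        "Alice was born in Paris.",
        "Paris is the capital of France.",
        "The Eiffel Tower opened in 1889.",  # distractor passage
    ]
    print(build_attribution_prompt("In which country was Alice born?", passages))
    # A well-behaved response might read:
    #   "Alice was born in Paris (Attribution: [1]). Paris is in France
    #    (Attribution: [2]). So Alice was born in France."
    reply = ("Alice was born in Paris (Attribution: [1]). "
             "Paris is in France (Attribution: [2]).")
    print(extract_attributions(reply))  # -> [1, 2]
```

Under this reading, attributions serve two purposes at once: they force each hop of a multi-hop chain to be grounded in a specific passage, and they make it easy to detect when the model has leaned on a distractor.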
Anthology ID:
2024.acl-long.135
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2462–2475
URL:
https://aclanthology.org/2024.acl-long.135
DOI:
10.18653/v1/2024.acl-long.135
Cite (ACL):
Yanyang Li, Shuo Liang, Michael Lyu, and Liwei Wang. 2024. Making Long-Context Language Models Better Multi-Hop Reasoners. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2462–2475, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Making Long-Context Language Models Better Multi-Hop Reasoners (Li et al., ACL 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2024.acl-long.135.pdf