Can We Guide a Multi-Hop Reasoning Language Model to Incrementally Learn at Each Single-Hop?

Jesus Lovon-Melgarejo, Jose G. Moreno, Romaric Besançon, Olivier Ferret, Lynda Tamine


Abstract
Despite the success of state-of-the-art pre-trained language models (PLMs) on a series of multi-hop reasoning tasks, they still suffer from a limited ability to transfer learning from simple to complex tasks and vice versa. We argue that one step forward to overcome this limitation is to better understand the behavioral trend of PLMs at each hop over the inference chain. Our critical underlying idea is to mimic human-style reasoning: we envision the multi-hop reasoning process as a sequence of explicit single-hop reasoning steps. To endow PLMs with incremental reasoning skills, we propose a set of inference strategies on relevant facts and distractors that allow us to automatically generate training datasets. Using the SHINRA and ConceptNet resources jointly, we empirically show the effectiveness of our proposal on multiple-choice question answering and reading comprehension, with relative accuracy improvements of 68.4% and 16.0% over classic PLMs, respectively.
Anthology ID:
2022.coling-1.125
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
1455–1466
URL:
https://aclanthology.org/2022.coling-1.125
Cite (ACL):
Jesus Lovon-Melgarejo, Jose G. Moreno, Romaric Besançon, Olivier Ferret, and Lynda Tamine. 2022. Can We Guide a Multi-Hop Reasoning Language Model to Incrementally Learn at Each Single-Hop?. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1455–1466, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Can We Guide a Multi-Hop Reasoning Language Model to Incrementally Learn at Each Single-Hop? (Lovon-Melgarejo et al., COLING 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2022.coling-1.125.pdf
Code
jeslev/incremental_reasoning
Data
ConceptNet, RACE