Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers
Wanjun Zhong, Tingting Ma, Jiahai Wang, Jian Yin, Tiejun Zhao, Chin-Yew Lin, Nan Duan
Abstract
This paper presents ReasonFormer, a unified reasoning framework that mirrors the modular and compositional reasoning process of humans in complex decision-making. Inspired by dual-process theory in cognitive science, the representation module (automatic thinking) and the reasoning modules (controlled thinking) are decoupled to capture different levels of cognition. On top of the representation module, the pre-trained reasoning modules are modular and each specializes in a specific, fundamental reasoning skill (e.g., logic, simple QA). To mimic the controlled, compositional thinking process, different reasoning modules are dynamically activated and composed in both parallel and cascaded manners, controlling which reasoning skills are activated and how deep the reasoning process goes for the current problem. The unified framework solves multiple tasks with a single model and is trained and run for inference in an end-to-end manner. Evaluated on 11 datasets requiring different reasoning skills and complexity levels, ReasonFormer demonstrates substantial performance gains, revealing its compositional reasoning ability. Few-shot experiments show better generalization to new tasks with limited data, achieved by learning to compose pre-trained skills and by decoupling the representation module from the reasoning modules. Further analysis confirms the modularity of the reasoning modules, as different tasks activate distinct reasoning skills at different reasoning depths.
- Anthology ID:
- 2023.findings-acl.480
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2023
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 7587–7600
- URL:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2023.findings-acl.480/
- DOI:
- 10.18653/v1/2023.findings-acl.480
- Cite (ACL):
- Wanjun Zhong, Tingting Ma, Jiahai Wang, Jian Yin, Tiejun Zhao, Chin-Yew Lin, and Nan Duan. 2023. Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7587–7600, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers (Zhong et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2023.findings-acl.480.pdf
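The architecture described in the abstract (a shared representation module feeding a cascade of reasoning layers, each of which routes among parallel pre-trained skill modules) can be sketched roughly as follows. This is an illustrative PyTorch sketch, not the authors' implementation: the class names, the softmax router, the skill count, and the layer sizes are assumptions made purely for the example.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of parallel + cascaded composition of reasoning "skills".
# Not the paper's code; routing scheme and dimensions are illustrative assumptions.

class ReasoningLayer(nn.Module):
    def __init__(self, hidden_size: int, num_skills: int):
        super().__init__()
        # One small transformer block per pre-trained "skill" (e.g., logic, simple QA).
        self.skills = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=hidden_size, nhead=8, batch_first=True)
            for _ in range(num_skills)
        )
        # Router decides how strongly each skill is activated for the current input.
        self.router = nn.Linear(hidden_size, num_skills)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Route on the pooled sequence representation: (batch, num_skills) weights.
        weights = torch.softmax(self.router(hidden.mean(dim=1)), dim=-1)
        # Run skills in parallel, then mix their outputs by the routing weights.
        outputs = torch.stack([skill(hidden) for skill in self.skills], dim=1)
        mixed = (weights[:, :, None, None] * outputs).sum(dim=1)
        return mixed  # cascaded into the next reasoning layer


class ReasonFormerSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden_size=256, num_skills=4, depth=3):
        super().__init__()
        # "Automatic thinking": a shared representation module (embeddings + one block here).
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.representation = nn.TransformerEncoderLayer(
            d_model=hidden_size, nhead=8, batch_first=True
        )
        # "Controlled thinking": a cascade of layers, each composing parallel skills.
        self.reasoning = nn.ModuleList(
            ReasoningLayer(hidden_size, num_skills) for _ in range(depth)
        )

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.representation(self.embed(input_ids))
        for layer in self.reasoning:
            hidden = layer(hidden)
        return hidden


# Toy usage: a batch of 2 sequences of 16 token ids.
model = ReasonFormerSketch()
out = model(torch.randint(0, 30522, (2, 16)))
print(out.shape)  # torch.Size([2, 16, 256])
```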