Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming

Hanlin Zhang, Jiani Huang, Ziyang Li, Mayur Naik, Eric Xing


Abstract
Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning framework efficiently learns weighted rules and applies semantic loss to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. The results of our experiments suggest that DSR-LM improves the logical reasoning abilities of pre-trained language models, resulting in a significant increase in accuracy of over 20% on deductive reasoning benchmarks. Furthermore, DSR-LM outperforms a variety of competitive baselines when faced with systematic changes in sequence length.
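The abstract describes a pipeline in which a pre-trained LM scores candidate relational facts, a differentiable symbolic module chains those facts under learnable rule weights, and the loss on the deduced conclusion is backpropagated end to end. The toy PyTorch sketch below illustrates only that flow under stated assumptions; the names (ToyDSR, fact_scorer, rule_weights) are illustrative, and the actual DSR-LM system uses a pre-trained LM for fact extraction and a Datalog-based differentiable reasoner rather than this single soft composition step.

```python
# Minimal sketch (not the authors' code) of the DSR-LM idea: a neural scorer
# predicts probabilities of relational facts, a differentiable forward-chaining
# step composes them under learnable rule weights, and the loss on the deduced
# relation is backpropagated to both the rules and the fact scorer.
import torch
import torch.nn as nn

N_REL = 4   # toy relation vocabulary, e.g. {father, mother, grandfather, grandmother}
N_ENT = 3   # entities mentioned in a passage

class ToyDSR(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in for the LM "perception" head: scores every (head, rel, tail) triple.
        self.fact_scorer = nn.Linear(16, N_REL)
        # Learnable weights for composition rules r3(x, z) <- r1(x, y), r2(y, z).
        self.rule_weights = nn.Parameter(torch.zeros(N_REL, N_REL, N_REL))

    def forward(self, pair_features):
        # pair_features: (N_ENT, N_ENT, 16) features for each entity pair.
        facts = torch.sigmoid(self.fact_scorer(pair_features))          # (E, E, R)
        w = torch.softmax(self.rule_weights.view(N_REL * N_REL, N_REL), dim=-1)
        w = w.view(N_REL, N_REL, N_REL)
        # One step of soft forward chaining: sum over the intermediate entity y,
        # composed[x, z, r1, r2] = sum_y facts[x, y, r1] * facts[y, z, r2].
        composed = torch.einsum('xyr,yzs->xzrs', facts, facts)
        # Weight each (r1, r2) pair by its learned distribution over conclusions r3,
        # then clamp to [0, 1] as a crude stand-in for probabilistic semantics.
        derived = torch.einsum('xzrs,rst->xzt', composed, w).clamp(0.0, 1.0)
        return facts, derived

model = ToyDSR()
feats = torch.randn(N_ENT, N_ENT, 16)
facts, derived = model(feats)
# Supervise only the deduced target relation; gradients flow to the rule weights
# and the fact scorer (i.e., back into the LM in the real system).
target = torch.zeros(N_ENT, N_ENT, N_REL)
loss = nn.functional.binary_cross_entropy(derived, target)
loss.backward()
```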
Anthology ID: 2023.findings-acl.191
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 3062–3077
URL: https://aclanthology.org/2023.findings-acl.191
DOI: 10.18653/v1/2023.findings-acl.191
Cite (ACL): Hanlin Zhang, Jiani Huang, Ziyang Li, Mayur Naik, and Eric Xing. 2023. Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3062–3077, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming (Zhang et al., Findings 2023)
PDF: https://preview.aclanthology.org/nschneid-patch-3/2023.findings-acl.191.pdf
Video: https://preview.aclanthology.org/nschneid-patch-3/2023.findings-acl.191.mp4