Natural Logic at the Core: Dynamic Rewards for Entailment Tree Generation

Jihao Shi, Xiao Ding, Kai Xiong, Hengwei Zhao, Bing Qin, Ting Liu


Abstract
Entailment trees are essential for enhancing interpretability and transparency in tasks like question answering and natural language understanding. However, existing approaches often lack logical consistency, as they rely on static reward structures or ignore the intricate dependencies within multi-step reasoning. To address these limitations, we propose a method that integrates natural logic principles into reinforcement learning, enabling dynamic reward computation to guide entailment tree generation. Our approach ensures logical consistency across reasoning steps while improving interpretability and generalization. Experiments on EntailmentBank demonstrate significant improvements over state-of-the-art methods, highlighting the effectiveness of natural logic in structured reasoning.
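The abstract describes computing per-step rewards from natural-logic relations during entailment tree generation. As a rough illustration only (the paper's actual reward design is not reproduced here; all names and values below are hypothetical), a MacCartney-style relation composition table can drive a dynamic reward that favors entailment-preserving reasoning steps:

```python
# Hypothetical sketch of a natural-logic-driven dynamic reward.
# Relation names follow MacCartney's natural logic; the join table is
# deliberately partial and the reward values are illustrative, not the paper's.
from enum import Enum

class Rel(Enum):
    EQ = "equivalence"
    FE = "forward_entailment"
    RE = "reverse_entailment"
    NEG = "negation"
    IND = "independence"

# Partial composition (join) table: relation of a two-step chain.
JOIN = {
    (Rel.EQ, Rel.EQ): Rel.EQ,
    (Rel.EQ, Rel.FE): Rel.FE,
    (Rel.FE, Rel.EQ): Rel.FE,
    (Rel.FE, Rel.FE): Rel.FE,
    (Rel.RE, Rel.RE): Rel.RE,
}

def compose(r1: Rel, r2: Rel) -> Rel:
    # Compositions missing from the table weaken to independence.
    return JOIN.get((r1, r2), Rel.IND)

def step_reward(relation: Rel) -> float:
    # Dynamic reward: entailment-preserving steps are rewarded,
    # contradiction is penalized, independence gets a small penalty.
    if relation in (Rel.EQ, Rel.FE):
        return 1.0
    if relation is Rel.NEG:
        return -1.0
    return -0.2

def trajectory_reward(step_relations: list[Rel]) -> float:
    # Accumulate reward while composing the relation along the reasoning path,
    # so each step's reward depends on the logical state reached so far.
    total, current = 0.0, Rel.EQ
    for r in step_relations:
        current = compose(current, r)
        total += step_reward(current)
    return total
```

For example, a chain of two forward-entailment steps stays in forward entailment and earns full reward, while a step that composes to independence is penalized; this is the sense in which the reward is "dynamic" rather than fixed per step.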
Anthology ID:
2025.findings-acl.893
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17372–17382
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.893/
Cite (ACL):
Jihao Shi, Xiao Ding, Kai Xiong, Hengwei Zhao, Bing Qin, and Ting Liu. 2025. Natural Logic at the Core: Dynamic Rewards for Entailment Tree Generation. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17372–17382, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Natural Logic at the Core: Dynamic Rewards for Entailment Tree Generation (Shi et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.893.pdf