Jiazheng Wang
2025
Enhancing Transformers for Generalizable First-Order Logical Entailment
Tianshi Zheng | Jiazheng Wang | Zihao Wang | Jiaxin Bai | Hang Yin | Zheye Deng | Yangqiu Song | Jianxin Li
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Transformers, as a fundamental deep learning architecture, have demonstrated great capability in reasoning. This paper studies the generalizable first-order logical reasoning ability of transformers with their *parameterized* knowledge and how to improve it. Transformers' capability of first-order reasoning is further captured by whether they can conduct first-order logical entailment, which is quantitatively measured by their performance in answering knowledge graph queries. We establish the connections between (1) two types of distribution shifts studied in out-of-distribution generalization and (2) the unseen knowledge and query settings discussed in the task of knowledge graph query answering, which makes it possible to characterize fine-grained generalizability. Results on our comprehensive dataset showed that transformers outperform previous methods designed particularly for this task and provided detailed empirical evidence about the impact of input query syntax, token embedding, and transformer architecture on the reasoning capability of transformers. Interestingly, our results revealed a mismatch between positional encoding and other design choices of transformer architectures in previous practices. Motivated by this, we propose TEGA, a logic-aware architecture that significantly improves performance in generalizable first-order logical entailment.