Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models

Jinhao Jiang, Kun Zhou, Ji-Rong Wen, Xin Zhao


Abstract
Commonsense reasoning in natural language is a desired ability of artificial intelligence systems. To solve complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder that models a commonsense knowledge graph (CSKG). Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs. Considering this issue, we conduct a deep empirical analysis and find that it is indeed relation features from CSKGs (but not node features) that mainly contribute to the performance improvement of PTMs. Based on this finding, we design a simple MLP-based knowledge encoder that utilizes statistical relation paths as features. Extensive experiments on five benchmarks demonstrate the effectiveness of our approach, which also largely reduces the parameters needed to encode CSKGs. Our code and data are publicly available at https://github.com/RUCAIBox/SAFE.
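As an illustration of the abstract's central idea, the sketch below shows what an MLP-based knowledge encoder over statistical relation-path features might look like in PyTorch. All class names, dimensions, and the additive fusion with the PTM score are hypothetical assumptions made for exposition, not the authors' exact implementation (see https://github.com/RUCAIBox/SAFE for the official code).

# Minimal, hypothetical sketch of an MLP knowledge encoder over
# relation-path count features; names and dimensions are illustrative.
import torch
import torch.nn as nn

class RelationPathMLP(nn.Module):
    """Scores a candidate answer from statistical relation-path features.

    `path_feats` is a fixed-length vector of counts/frequencies of
    relation paths (e.g., 1- and 2-hop CSKG relation sequences)
    connecting question entities to the candidate-answer entity.
    """
    def __init__(self, num_path_types: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_path_types, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar knowledge score per candidate
        )

    def forward(self, path_feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(path_feats).squeeze(-1)

# Fuse with the PTM's plausibility score by simple addition (one common
# fusion choice; the paper's exact fusion may differ).
batch, num_choices, num_path_types = 4, 5, 40
path_feats = torch.rand(batch, num_choices, num_path_types)
ptm_scores = torch.rand(batch, num_choices)        # e.g., from RoBERTa
knowledge_scores = RelationPathMLP(num_path_types)(path_feats)
logits = ptm_scores + knowledge_scores             # (batch, num_choices)
pred = logits.argmax(dim=-1)

Because the features are just fixed-length relation-path count vectors, such an encoder needs only a few thousand parameters, in contrast to a multi-layer GNN over the full CSKG, which is consistent with the parameter reduction the abstract reports.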
Anthology ID:
2022.findings-naacl.131
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1730–1741
URL:
https://aclanthology.org/2022.findings-naacl.131
DOI:
10.18653/v1/2022.findings-naacl.131
Cite (ACL):
Jinhao Jiang, Kun Zhou, Ji-Rong Wen, and Xin Zhao. 2022. Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1730–1741, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Great Truths are Always Simple: A Rather Simple Knowledge Encoder for Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models (Jiang et al., Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2022.findings-naacl.131.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-4/2022.findings-naacl.131.mp4
Code:
rucaibox/safe
Data:
COPA, CommonsenseQA, ConceptNet, OpenBookQA, PIQA