A Hybrid Neural Network Model for Commonsense Reasoning

Pengcheng He, Xiaodong Liu, Weizhu Chen, Jianfeng Gao


Abstract
This paper proposes a hybrid neural network (HNN) model for commonsense reasoning. An HNN consists of two component models, a masked language model and a semantic similarity model, which share a BERT-based contextual encoder but use different model-specific input and output layers. HNN obtains new state-of-the-art results on three classic commonsense reasoning tasks, pushing the WNLI benchmark to 89%, the Winograd Schema Challenge (WSC) benchmark to 75.1%, and the PDP60 benchmark to 90.0%. An ablation study shows that language models and semantic similarity models are complementary approaches to commonsense reasoning, and HNN effectively combines the strengths of both. The code and pre-trained models will be publicly available at https://github.com/namisan/mt-dnn.
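The abstract describes a shared BERT encoder feeding two task-specific heads: a masked-LM head that scores candidate antecedents by the likelihood of the masked pronoun's referent, and a semantic similarity head that scores candidate/context pairs. The sketch below is not the authors' implementation; it only illustrates that shared-encoder, two-head layout, assuming the Hugging Face transformers library, with head designs and the class name chosen here for illustration.

```python
import torch
import torch.nn as nn
from transformers import BertModel


class HybridCommonsenseModel(nn.Module):
    """Illustrative sketch: one shared BERT encoder, two model-specific heads."""

    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        # Shared BERT-based contextual encoder used by both component models.
        self.encoder = BertModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        vocab = self.encoder.config.vocab_size
        # Masked-LM head: predicts vocabulary tokens at masked positions.
        self.mlm_head = nn.Linear(hidden, vocab)
        # Semantic-similarity head: scores a (context, candidate) pair from the
        # [CLS] representation as a single logit (an assumed, simplified design).
        self.sim_head = nn.Linear(hidden, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state               # (batch, seq_len, hidden)
        cls_state = token_states[:, 0]                      # (batch, hidden)
        mlm_logits = self.mlm_head(token_states)            # language-model scores
        sim_logit = self.sim_head(cls_state).squeeze(-1)    # similarity score
        return mlm_logits, sim_logit
```

At inference time, the two per-candidate scores could be combined (for example, averaged) to rank antecedent candidates; the exact combination used in the paper is not specified in this abstract.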
Anthology ID:
D19-6002
Volume:
Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing
Month:
November
Year:
2019
Address:
Hong Kong, China
Venue:
WS
Publisher:
Association for Computational Linguistics
Pages:
13–21
URL:
https://aclanthology.org/D19-6002
DOI:
10.18653/v1/D19-6002
Cite (ACL):
Pengcheng He, Xiaodong Liu, Weizhu Chen, and Jianfeng Gao. 2019. A Hybrid Neural Network Model for Commonsense Reasoning. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 13–21, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
A Hybrid Neural Network Model for Commonsense Reasoning (He et al., 2019)
PDF:
https://preview.aclanthology.org/ingestion-script-update/D19-6002.pdf
Code
namisan/mt-dnn (+ additional community code)
Data
GLUE, ReCoRD, WSC