Psycholinguistic Diagnosis of Language Models’ Commonsense Reasoning

Yan Cong


Abstract
Neural language models have attracted considerable attention in recent years. A growing number of researchers are investigating how language models encode commonsense: specifically, what kinds of commonsense they understand and why. This paper analyzes neural language models' understanding of commonsense pragmatics (i.e., implied meanings) through human behavioral and neurophysiological data. These psycholinguistic tests are designed to draw conclusions from predictive responses in context, making them well suited to testing word-prediction models such as BERT in natural settings; they provide appropriate prompts and tasks for answering questions about the linguistic mechanisms underlying predictive responses. This paper adopts psycholinguistic datasets to probe language models' commonsense reasoning. Findings suggest that GPT-3's performance was mostly at chance on the psycholinguistic tasks. We also show that DistilBERT has some understanding of the (implied) intent that is shared among most people, an intent implicitly reflected in the use of conversational implicatures and presuppositions. Whether fine-tuning improved its performance to human level depended on the type of commonsense reasoning.
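As a rough illustration of the kind of word-prediction probing the abstract describes, the sketch below queries a masked language model on a scalar-implicature-style prompt using the Hugging Face transformers fill-mask pipeline. This is not the paper's released code (see the repository linked below); the model choice, prompt, and candidate words are illustrative assumptions, not items from the paper's datasets.

```python
from transformers import pipeline

# Minimal sketch (not the paper's code): probe a masked word-prediction model
# on a scalar-implicature-style prompt. The prompt and candidates below are
# illustrative assumptions, not items from the paper's data.
fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

# "Some" pragmatically implies "not all"; a model sensitive to this implicature
# should prefer "all" over "none" in the masked slot.
prompt = "Some of the students passed the exam, but not [MASK] of them."

for result in fill_mask(prompt, targets=["all", "none"]):
    # Each result holds a candidate token and its prediction probability.
    print(result["token_str"], round(result["score"], 4))
```

A higher score for the pragmatically felicitous candidate would count as minimal evidence of implicature sensitivity; the paper's actual items, metrics, and fine-tuning setup are described in the PDF linked below.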
Anthology ID:
2022.csrr-1.3
Volume:
Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
CSRR
Publisher:
Association for Computational Linguistics
Pages:
17–22
URL:
https://aclanthology.org/2022.csrr-1.3
DOI:
10.18653/v1/2022.csrr-1.3
Cite (ACL):
Yan Cong. 2022. Psycholinguistic Diagnosis of Language Models’ Commonsense Reasoning. In Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022), pages 17–22, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Psycholinguistic Diagnosis of Language Models’ Commonsense Reasoning (Cong, CSRR 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.csrr-1.3.pdf
Video:
https://preview.aclanthology.org/ingestion-script-update/2022.csrr-1.3.mp4
Code:
yancong222/pragamtics-commonsense-lms
Data:
SuperGLUE