Representation and Pre-Activation of Lexical-Semantic Knowledge in Neural Language Models

Steven Derby, Paul Miller, Barry Devereux


Abstract
In this paper, we perform a systematic analysis of how closely the intermediate layers from LSTM and transformer language models correspond to human semantic knowledge. Furthermore, in order to make more meaningful comparisons with theories of human language comprehension in psycholinguistics, we focus on two key stages where the meaning of a particular target word may arise: immediately before the word’s presentation to the model (comparable to forward inferencing), and immediately after the word token has been input into the network. Our results indicate that the transformer models are better at capturing semantic knowledge relating to lexical concepts, both during word prediction and when retention is required.
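The two extraction stages described in the abstract can be illustrated with a minimal sketch (not the authors' code): for a left-to-right transformer, the hidden state at the position immediately preceding a target word approximates the pre-activation (word-prediction) stage, while the state at the target position itself corresponds to the stage immediately after the word has been input. The model choice (GPT-2 via the Hugging Face transformers library), the example sentence, and the target word below are illustrative assumptions.

```python
# Hedged sketch: per-layer hidden states just before and just after a target
# word in a causal transformer LM. Model, sentence, and target are assumptions.
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

sentence = "The cat sat on the mat"
target = " mat"  # GPT-2 BPE tokens carry a leading space

ids = tokenizer(sentence, return_tensors="pt")
# Assumes the target maps to a single BPE token; otherwise this takes its first piece.
target_id = tokenizer(target)["input_ids"][0]
pos = (ids["input_ids"][0] == target_id).nonzero()[0].item()

with torch.no_grad():
    outputs = model(**ids)

# hidden_states: tuple of (num_layers + 1) tensors, each of shape (1, seq_len, dim)
for layer, h in enumerate(outputs.hidden_states):
    pre = h[0, pos - 1]   # state before the target word is presented (prediction)
    post = h[0, pos]      # state after the target word has been input (retention)
    print(layer, pre.shape, post.shape)
```

The resulting per-layer pre- and post-presentation vectors are the kind of intermediate representations that can then be compared against measures of human lexical-semantic knowledge, as outlined in the abstract.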
Anthology ID: 2021.cmcl-1.25
Volume: Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics
Month: June
Year: 2021
Address: Online
Venue: CMCL
Publisher: Association for Computational Linguistics
Pages: 211–221
URL: https://aclanthology.org/2021.cmcl-1.25
DOI: 10.18653/v1/2021.cmcl-1.25
Cite (ACL): Steven Derby, Paul Miller, and Barry Devereux. 2021. Representation and Pre-Activation of Lexical-Semantic Knowledge in Neural Language Models. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 211–221, Online. Association for Computational Linguistics.
Cite (Informal): Representation and Pre-Activation of Lexical-Semantic Knowledge in Neural Language Models (Derby et al., CMCL 2021)
PDF: https://preview.aclanthology.org/emnlp-22-attachments/2021.cmcl-1.25.pdf
Data
Billion Word Benchmark