Stolen Probability: A Structural Weakness of Neural Language Models

David Demeter, Gregory Kimmel, Doug Downey


Abstract
Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space. The dot-product distance metric forms part of the inductive bias of NNLMs. Although NNLMs optimize well with this inductive bias, we show that this results in a sub-optimal ordering of the embedding space that structurally impoverishes some words at the expense of others when assigning probability. We present numerical, theoretical and empirical analyses which show that words on the interior of the convex hull in the embedding space have their probability bounded by the probabilities of the words on the hull.
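
The bound described in the abstract follows from the linearity of the dot product and is easy to check numerically. The sketch below is a hedged illustration only (not the authors' code; the vocabulary size, embedding dimension, and random vectors are assumptions for demonstration): it builds one word embedding as a convex combination of the others and verifies that, under a bias-free dot-product softmax, that interior word never receives the highest probability.

```python
# Minimal numerical sketch (not the authors' code) of the "stolen probability" bound:
# if a word's embedding lies inside the convex hull of the other word embeddings,
# its dot-product logit is a convex combination of the hull words' logits, so its
# softmax probability can never exceed that of the best hull word.
import numpy as np

rng = np.random.default_rng(0)

d, n_hull = 16, 50                     # assumed embedding dimension and hull vocabulary size
hull = rng.normal(size=(n_hull, d))    # embeddings of the "hull" words

# One interior word, constructed as a strict convex combination of the hull words.
weights = rng.dirichlet(np.ones(n_hull))
interior = weights @ hull

E = np.vstack([hull, interior])        # full embedding matrix; interior word has index n_hull

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# For many random prediction (hidden-state) vectors h:
#   h . interior = sum_i w_i (h . hull_i) <= max_i (h . hull_i),
# so the interior word's probability is bounded by the best hull word's probability.
for _ in range(10_000):
    h = rng.normal(size=d)
    probs = softmax(E @ h)
    assert probs[n_hull] <= probs[:n_hull].max() + 1e-12

print("Interior word never beat the best hull word over 10,000 random prediction vectors.")
```

Because the interior logit is a convex combination of the hull logits, the inequality holds for every prediction vector, not just the sampled ones; the loop merely makes the structural bound concrete.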
Anthology ID:
2020.acl-main.198
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Editors:
Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2191–2197
URL:
https://aclanthology.org/2020.acl-main.198
DOI:
10.18653/v1/2020.acl-main.198
Cite (ACL):
David Demeter, Gregory Kimmel, and Doug Downey. 2020. Stolen Probability: A Structural Weakness of Neural Language Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2191–2197, Online. Association for Computational Linguistics.
Cite (Informal):
Stolen Probability: A Structural Weakness of Neural Language Models (Demeter et al., ACL 2020)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2020.acl-main.198.pdf
Video:
http://slideslive.com/38929181
Data
WikiText-2