Word Representations Concentrate and This is Good News!

Romain Couillet, Yagmur Gizem Cinar, Eric Gaussier, Muhammad Imran


Abstract
This article establishes that, unlike the legacy tf*idf representation, recent natural language representations (word embedding vectors) tend to exhibit a so-called concentration of measure phenomenon, in the sense that, as the representation size p and database size n are both large, their behavior is similar to that of large dimensional Gaussian random vectors. This phenomenon may have important consequences as machine learning algorithms for natural language data could be amenable to improvement, thereby providing new theoretical insights into the field of natural language processing.
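To make the abstract's claim concrete, here is a minimal numerical sketch (an illustration of the concentration-of-measure phenomenon in general, not the paper's own experiments or code): for a large-dimensional vector with i.i.d. standard Gaussian entries, the reference model the paper compares embeddings to, the Euclidean norm concentrates sharply around the square root of the dimension p, with fluctuations that do not grow with p. NumPy is assumed; the dimensions and sample count below are arbitrary choices.

```python
# Minimal sketch of concentration of measure for Gaussian vectors:
# as p grows, ||x|| hovers ever more tightly (in relative terms)
# around sqrt(p), even though each coordinate stays O(1).
import numpy as np

rng = np.random.default_rng(0)
n = 5000  # number of sample vectors per dimension (arbitrary)

for p in (10, 100, 1000, 10000):
    X = rng.standard_normal((n, p))          # n i.i.d. N(0, I_p) vectors
    norms = np.linalg.norm(X, axis=1)        # their Euclidean norms
    # The standard deviation of the norm stays roughly constant (~0.7)
    # while the mean grows like sqrt(p): relative spread vanishes.
    print(f"p={p:6d}  mean ||x|| = {norms.mean():8.2f}"
          f"  (sqrt(p) = {np.sqrt(p):7.2f})  std = {norms.std():.3f}")
```

Under the paper's thesis, modern word embedding vectors behave like the Gaussian samples above in this respect, whereas legacy tf*idf vectors do not.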
Anthology ID:
2020.conll-1.25
Volume:
Proceedings of the 24th Conference on Computational Natural Language Learning
Month:
November
Year:
2020
Address:
Online
Editors:
Raquel Fernández, Tal Linzen
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
325–334
URL:
https://aclanthology.org/2020.conll-1.25
DOI:
10.18653/v1/2020.conll-1.25
Cite (ACL):
Romain Couillet, Yagmur Gizem Cinar, Eric Gaussier, and Muhammad Imran. 2020. Word Representations Concentrate and This is Good News!. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 325–334, Online. Association for Computational Linguistics.
Cite (Informal):
Word Representations Concentrate and This is Good News! (Couillet et al., CoNLL 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.conll-1.25.pdf
Code:
 ygcinar/nlp-concentration