Abstract
Vector space models of words have long been claimed to capture linguistic regularities as simple vector translations, but problems have been raised with this claim. We decompose and empirically analyze the classic arithmetic word analogy test, to motivate two new metrics that address the issues with the standard test, and which distinguish between class-wise offset concentration (similar directions between pairs of words drawn from different broad classes, such as France-London, China-Ottawa, ...) and pairing consistency (the existence of a regular transformation between correctly-matched pairs such as France:Paris::China:Beijing). We show that, while the standard analogy test is flawed, several popular word embeddings do nevertheless encode linguistic regularities.
- Anthology ID:
- 2020.conll-1.29
- Volume:
- Proceedings of the 24th Conference on Computational Natural Language Learning
- Month:
- November
- Year:
- 2020
- Address:
- Online
- Editors:
- Raquel Fernández, Tal Linzen
- Venue:
- CoNLL
- SIG:
- SIGNLL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 365–375
- URL:
- https://aclanthology.org/2020.conll-1.29
- DOI:
- 10.18653/v1/2020.conll-1.29
- Cite (ACL):
- Louis Fournier, Emmanuel Dupoux, and Ewan Dunbar. 2020. Analogies minus analogy test: measuring regularities in word embeddings. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 365–375, Online. Association for Computational Linguistics.
- Cite (Informal):
- Analogies minus analogy test: measuring regularities in word embeddings (Fournier et al., CoNLL 2020)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-5/2020.conll-1.29.pdf
- Code
- bootphon/measuring-regularities-in-word-embeddings
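The classic arithmetic analogy test that the abstract decomposes can be sketched as follows. This is a minimal illustration with hand-made toy vectors (hypothetical values, not real embeddings), not the paper's implementation: given a:b::c:?, it returns the vocabulary word whose vector is closest by cosine to b - a + c, excluding the three query words, and it also shows the offset-direction comparison that underlies the idea of offset concentration.

```python
import numpy as np

# Toy stand-in vectors for real word embeddings (hypothetical values,
# chosen only so the example resolves cleanly).
emb = {
    "France":  np.array([1.0, 0.0, 0.2]),
    "Paris":   np.array([1.0, 1.0, 0.2]),
    "China":   np.array([0.0, 1.0, 0.9]),
    "Beijing": np.array([0.0, 2.0, 0.9]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    """Classic arithmetic test for a:b::c:? — return the word whose
    vector is closest (by cosine) to b - a + c, excluding the three
    query words, as is standard in this style of evaluation."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("France", "Paris", "China", emb))  # → Beijing

# Offsets between correctly matched pairs pointing in similar directions
# is the intuition behind measuring offset concentration.
offset_fr = emb["Paris"] - emb["France"]
offset_cn = emb["Beijing"] - emb["China"]
print(cosine(offset_fr, offset_cn))  # → 1.0 for these toy vectors
```

Note that excluding the query words from the candidate set is one of the design choices of the standard test that the paper scrutinizes: with the exclusion, the test can succeed even when b - a + c is not actually nearest to the correct answer.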