One Representation per Word - Does it make Sense for Composition?

Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, David Weir


Abstract
In this paper, we investigate whether an a priori disambiguation of word senses is strictly necessary, or whether the meaning of a word in context can be disambiguated through composition alone. We evaluate the performance of off-the-shelf single-sense and multi-sense vector models on a benchmark phrase similarity task and on a novel word-sense discrimination task. We find that single-sense vector models perform as well as, or better than, multi-sense vector models, despite their arguably less clean elementary representations. Our findings furthermore show that simple composition functions such as pointwise addition recover sense-specific information from a single-sense vector model remarkably well.
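The mechanism the abstract describes, recovering sense-specific information by pointwise addition of single-sense word vectors, is simple enough to sketch. The snippet below is an illustrative toy example, not the paper's experimental code (that lives in the repository linked at the bottom of this page); the embeddings, vocabulary, and similarity targets are all invented for the demonstration.

```python
import numpy as np

def compose(vectors):
    """Compose a phrase by pointwise addition of its word vectors."""
    return np.sum(vectors, axis=0)

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy single-sense embeddings, invented for illustration only.
# In a real setup these would come from an off-the-shelf model
# such as word2vec or GloVe.
emb = {
    "bank":    np.array([0.9, 0.8, 0.1, 0.1]),  # blends finance + river senses
    "money":   np.array([1.0, 0.1, 0.0, 0.1]),
    "river":   np.array([0.0, 0.1, 1.0, 0.2]),
    "deposit": np.array([0.8, 0.2, 0.1, 0.0]),
    "shore":   np.array([0.1, 0.0, 0.9, 0.3]),
}

# Composing the ambiguous word with a disambiguating context word
# pulls the phrase vector towards the contextually relevant sense.
financial = compose([emb["bank"], emb["money"]])   # "bank money"
riverside = compose([emb["bank"], emb["river"]])   # "river bank"

print(cosine(financial, emb["deposit"]))  # ~0.97: finance sense recovered
print(cosine(riverside, emb["shore"]))    # ~0.72: river sense recovered
print(cosine(financial, emb["shore"]))    # ~0.17: the two contexts diverge
```

The point of the sketch is the contrast in the last three lines: a single, undisambiguated vector for "bank" ends up close to "deposit" in one composed phrase and close to "shore" in the other, which is the kind of in-context disambiguation through composition that the paper evaluates at scale.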
Anthology ID: W17-1910
Volume: Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications
Month: April
Year: 2017
Address: Valencia, Spain
Editors: Jose Camacho-Collados, Mohammad Taher Pilehvar
Venue: SENSE
Publisher: Association for Computational Linguistics
Pages: 79–90
URL: https://aclanthology.org/W17-1910
DOI: 10.18653/v1/W17-1910
Cite (ACL): Thomas Kober, Julie Weeds, John Wilkie, Jeremy Reffin, and David Weir. 2017. One Representation per Word - Does it make Sense for Composition? In Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications, pages 79–90, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): One Representation per Word - Does it make Sense for Composition? (Kober et al., SENSE 2017)
PDF: https://preview.aclanthology.org/ingest-bitext-workshop/W17-1910.pdf
Code: tttthomasssss/sense2017