Which Evaluations Uncover Sense Representations that Actually Make Sense?

Jordan Boyd-Graber, Fenfei Guo, Leah Findlater, Mohit Iyyer


Abstract
Text representations are critical for modern natural language processing. One form of text representation, sense-specific embeddings, reflects a word’s sense in a sentence better than single-prototype word embeddings tied to each type. However, existing sense representations are not uniformly better: although they work well for computer-centric evaluations, they fail for human-centric tasks like inspecting a language’s sense inventory. To expose this discrepancy, we propose a new coherence evaluation for sense embeddings. We also describe a minimal model (Gumbel Attention for Sense Induction) optimized for discovering interpretable sense representations that are more coherent than existing sense embeddings.
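The core mechanism named in the abstract, Gumbel attention, builds on the Gumbel-softmax trick (Jang et al., 2017) for drawing near-discrete samples from a categorical distribution. The sketch below is only an illustration of that trick applied to sense selection, not the authors' implementation: the function names, the dot-product scoring of senses against a context vector, and the temperature value are all assumptions.

import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    # Gumbel-softmax trick: perturb logits with Gumbel noise, then
    # apply a temperature-scaled softmax to get a near one-hot sample.
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max())        # numerically stable softmax
    return y / y.sum()

def select_sense(context_vec, sense_embeddings, temperature=0.5):
    # Hypothetical helper: score each candidate sense of a word against
    # the sentence context, then pick one sense via Gumbel attention.
    logits = sense_embeddings @ context_vec      # (K,) sense scores
    weights = gumbel_softmax(logits, temperature)  # near one-hot over K
    return weights @ sense_embeddings            # soft-selected sense vector

rng = np.random.default_rng(0)
senses = rng.normal(size=(3, 50))   # 3 candidate senses, 50-dim (toy data)
context = rng.normal(size=50)       # toy sentence-context vector
vec = select_sense(context, senses)

At low temperatures the Gumbel-softmax weights approach a one-hot vector, so the model commits to a single sense per token while remaining differentiable, which is what makes the resulting sense inventory inspectable by humans.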
Anthology ID:
2020.lrec-1.214
Volume:
Proceedings of the Twelfth Language Resources and Evaluation Conference
Month:
May
Year:
2020
Address:
Marseille, France
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
1727–1738
Language:
English
URL:
https://aclanthology.org/2020.lrec-1.214
Cite (ACL):
Jordan Boyd-Graber, Fenfei Guo, Leah Findlater, and Mohit Iyyer. 2020. Which Evaluations Uncover Sense Representations that Actually Make Sense? In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1727–1738, Marseille, France. European Language Resources Association.
Cite (Informal):
Which Evaluations Uncover Sense Representations that Actually Make Sense? (Boyd-Graber et al., LREC 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.lrec-1.214.pdf