Learning to Understand Phrases by Embedding the Dictionary

Felix Hill, Kyunghyun Cho, Anna Korhonen, Yoshua Bengio


Abstract
Distributional models that learn rich semantic word representations are a success story of recent NLP research. However, developing models that learn useful representations of phrases and sentences has proved far harder. We propose using the definitions found in everyday dictionaries as a means of bridging this gap between lexical and phrasal semantics. Neural language embedding models can be effectively trained to map dictionary definitions (phrases) to (lexical) representations of the words defined by those definitions. We present two applications of these architectures: reverse dictionaries that return the name of a concept given a definition or description, and general-knowledge crossword question answerers. On both tasks, neural language embedding models trained on definitions from a handful of freely available lexical resources perform as well as or better than existing commercial systems that rely on significant task-specific engineering. The results highlight the effectiveness of both neural embedding architectures and definition-based training for developing models that understand phrases and sentences.
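
The abstract's core idea (training an embedding model to map a definition phrase onto the vector of the word it defines) can be made concrete with a short sketch. The following is a minimal toy version in PyTorch; the single-layer LSTM encoder, random stand-in target embeddings, vocabulary size, dimensions, and plain cosine loss are illustrative assumptions rather than the paper's exact architecture or training objective.

import torch
import torch.nn as nn

VOCAB_SIZE, EMB_DIM = 1000, 64

# Pre-trained target word embeddings (random stand-ins here), kept fixed
# during training; the encoder learns to map definitions onto them.
target_emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
target_emb.weight.requires_grad = False

# Trainable input embeddings and an LSTM encoder for the definition phrase.
input_emb = nn.Embedding(VOCAB_SIZE, EMB_DIM)
encoder = nn.LSTM(EMB_DIM, EMB_DIM, batch_first=True)

params = list(input_emb.parameters()) + list(encoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
cos = nn.CosineSimilarity(dim=-1)

def encode(definition_ids):
    # Map a batch of definitions (LongTensor of word ids, shape
    # [batch, length]) to one vector each: the LSTM's final hidden state.
    _, (h, _) = encoder(input_emb(definition_ids))
    return h[-1]

# One toy training step: a fake batch of 8 five-token definitions, each
# paired with the id of the word it defines.
definitions = torch.randint(0, VOCAB_SIZE, (8, 5))
defined_words = torch.randint(0, VOCAB_SIZE, (8,))

pred = encode(definitions)
loss = (1 - cos(pred, target_emb(defined_words))).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()

At query time the same encoder supports the reverse-dictionary application described above: encode the input description and return the words whose target embeddings are nearest by cosine similarity; a crossword answerer can additionally filter candidates by the required answer length.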
Anthology ID: Q16-1002
Volume: Transactions of the Association for Computational Linguistics, Volume 4
Year: 2016
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 17–30
URL: https://aclanthology.org/Q16-1002
DOI: 10.1162/tacl_a_00080
Cite (ACL): Felix Hill, Kyunghyun Cho, Anna Korhonen, and Yoshua Bengio. 2016. Learning to Understand Phrases by Embedding the Dictionary. Transactions of the Association for Computational Linguistics, 4:17–30.
Cite (Informal): Learning to Understand Phrases by Embedding the Dictionary (Hill et al., TACL 2016)
PDF: https://aclanthology.org/Q16-1002.pdf