Lexical Entailment with Hierarchy Representations by Deep Metric Learning

Naomi Sato, Masaru Isonuma, Kimitaka Asatani, Shoya Ishizuka, Aori Shimizu, Ichiro Sakata


Abstract
In this paper, we introduce a novel method for lexical entailment tasks, which detect hyponym-hypernym relations between words. Existing lexical entailment studies lack generalization performance, as they cannot be applied to words that are not included in the training dataset. Moreover, existing work evaluates performance on datasets that contain words used for training. This study proposes a method that learns a mapping from word embeddings to hierarchical embeddings in order to predict the hypernymy relations of any input words. To validate generalization performance, we conduct experiments using a training dataset that does not overlap with the evaluation dataset. Our method achieves state-of-the-art performance and shows robustness to unknown words.
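The abstract only sketches the approach at a high level. As a rough illustration (not the authors' actual model or code), the following PyTorch snippet shows one way a mapping from pretrained word embeddings to a lower-dimensional hierarchy-aware space might be trained with a margin-based metric loss over hyponym-hypernym pairs; all names and hyperparameters (HierarchyMapper, in_dim=300, out_dim=64, margin=1.0) are hypothetical.

# Illustrative sketch only: a learned mapping from generic word embeddings
# to hierarchical embeddings, trained with a triplet-style metric loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchyMapper(nn.Module):
    """Maps pretrained word embeddings to a hierarchy-aware space (hypothetical)."""
    def __init__(self, in_dim=300, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, x):
        return self.net(x)

def metric_loss(hypo, hyper, negative, margin=1.0):
    """Pull hyponym-hypernym pairs together, push negative pairs apart."""
    pos = F.pairwise_distance(hypo, hyper)
    neg = F.pairwise_distance(hypo, negative)
    return F.relu(pos - neg + margin).mean()

# Toy usage: random vectors stand in for pretrained embeddings of a hyponym,
# its hypernym, and an unrelated (negative) word.
mapper = HierarchyMapper()
hypo, hyper, neg = (torch.randn(32, 300) for _ in range(3))
loss = metric_loss(mapper(hypo), mapper(hyper), mapper(neg))
loss.backward()

Because the mapper takes arbitrary pretrained embeddings as input, it can in principle be applied to words unseen during training, which is the generalization property the abstract emphasizes.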
Anthology ID:
2022.findings-emnlp.257
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3517–3522
URL:
https://aclanthology.org/2022.findings-emnlp.257
DOI:
10.18653/v1/2022.findings-emnlp.257
Cite (ACL):
Naomi Sato, Masaru Isonuma, Kimitaka Asatani, Shoya Ishizuka, Aori Shimizu, and Ichiro Sakata. 2022. Lexical Entailment with Hierarchy Representations by Deep Metric Learning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3517–3522, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Lexical Entailment with Hierarchy Representations by Deep Metric Learning (Sato et al., Findings 2022)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2022.findings-emnlp.257.pdf