Negative Sampling Improves Hypernymy Extraction Based on Projection Learning
Dmitry Ustalov, Nikolay Arefyev, Chris Biemann, Alexander Panchenko
Abstract
We present a new approach to the extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction has not been studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the state-of-the-art approach of Fu et al. (2014) on three datasets from different languages.
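To make the idea in the abstract concrete, the sketch below fits a projection matrix on positive (hyponym, hypernym) embedding pairs, in the spirit of Fu et al. (2014), and adds a penalty term for sampled negative pairs. The exact loss, the margin-based hinge, and all names such as `train_projection` are assumptions for illustration only; the authors' actual models are in the nlpub/projlearn repository linked below.

```python
# A minimal sketch (not the authors' implementation; see nlpub/projlearn) of
# projection learning with explicit negative examples. Assumptions: a squared
# error on positive (hyponym, hypernym) pairs plus a margin-based penalty that
# keeps projected hyponyms away from sampled negative vectors.
import numpy as np


def train_projection(X, Y, Z, dim, lr=0.1, reg=0.1, margin=1.0, epochs=100, seed=0):
    """Learn a matrix `phi` such that phi @ x is close to y for each positive
    pair (x, y) and at least `margin` away (in squared distance) from the
    negative example z sampled for that pair.

    X, Y, Z: float arrays of shape (n_pairs, dim). The parameter names and the
    exact loss form are illustrative assumptions, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    phi = rng.normal(scale=0.01, size=(dim, dim))
    n = X.shape[0]
    for _ in range(epochs):
        P = X @ phi.T                       # projected hyponyms, shape (n, dim)
        pos_err = P - Y                     # pull projections towards true hypernyms
        neg_err = P - Z                     # push projections away from negatives
        # Hinge: only penalize negatives that are still closer than the margin.
        active = (np.sum(neg_err ** 2, axis=1) < margin).astype(float)
        grad = (pos_err.T @ X) / n - reg * ((neg_err * active[:, None]).T @ X) / n
        phi -= lr * grad
    return phi


# Toy usage with random vectors standing in for word embeddings.
dim = 50
rng = np.random.default_rng(1)
X, Y, Z = (rng.normal(size=(200, dim)) for _ in range(3))
phi = train_projection(X, Y, Z, dim)
print(np.linalg.norm(X @ phi.T - Y))        # reconstruction error on positive pairs
```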
- Anthology ID:
- E17-2087
- Volume:
- Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers
- Month:
- April
- Year:
- 2017
- Address:
- Valencia, Spain
- Editors:
- Mirella Lapata, Phil Blunsom, Alexander Koller
- Venue:
- EACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 543–550
- URL:
- https://aclanthology.org/E17-2087
- Cite (ACL):
- Dmitry Ustalov, Nikolay Arefyev, Chris Biemann, and Alexander Panchenko. 2017. Negative Sampling Improves Hypernymy Extraction Based on Projection Learning. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 543–550, Valencia, Spain. Association for Computational Linguistics.
- Cite (Informal):
- Negative Sampling Improves Hypernymy Extraction Based on Projection Learning (Ustalov et al., EACL 2017)
- PDF:
- https://aclanthology.org/E17-2087.pdf
- Code:
- nlpub/projlearn
- Data:
- EVALution