Abstract
Supervised distributional methods have been applied successfully to lexical entailment, but recent work has questioned whether these methods actually learn a relation between the two words. Specifically, Levy et al. (2015) claimed that linear classifiers learn only separate properties of each word. We suggest a cheap and easy way to boost the performance of these methods by integrating multiplicative features into commonly used representations. We provide an extensive evaluation with different classifiers and evaluation setups, and suggest a suitable evaluation setup for the task that eliminates the biases present in previous ones.
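As a rough illustration of the idea described in the abstract, a supervised lexical-entailment classifier can be fed, in addition to the usual concatenation of the two word vectors, their element-wise (multiplicative) product. The sketch below is not the authors' implementation; the embeddings, word pairs, feature layout, and the use of scikit-learn's LogisticRegression are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's code): pair features for
# supervised lexical entailment, adding element-wise multiplicative features
# to a standard concatenation representation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Concatenate both word vectors and their element-wise product."""
    return np.concatenate([x, y, x * y])

# Toy stand-in for pre-trained word embeddings (assumption for demonstration).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in ["animal", "dog", "cat", "car", "vehicle"]}

# Toy (hyponym, hypernym, label) pairs; label 1 = entailment holds.
train_pairs = [("dog", "animal", 1), ("cat", "animal", 1),
               ("animal", "dog", 0), ("car", "animal", 0)]
X = np.stack([pair_features(vocab[a], vocab[b]) for a, b, _ in train_pairs])
y = np.array([label for _, _, label in train_pairs])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([pair_features(vocab["car"], vocab["vehicle"])]))
```

The multiplicative term x * y lets a linear classifier access interactions between corresponding dimensions of the two word vectors, which pure concatenation cannot express.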
- Anthology ID: S18-2020
- Volume: Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
- Month: June
- Year: 2018
- Address: New Orleans, Louisiana
- Editors: Malvina Nissim, Jonathan Berant, Alessandro Lenci
- Venue: *SEM
- SIGs: SIGLEX | SIGSEM
- Publisher: Association for Computational Linguistics
- Pages: 160–166
- URL: https://aclanthology.org/S18-2020
- DOI: 10.18653/v1/S18-2020
- Cite (ACL): Tu Vu and Vered Shwartz. 2018. Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 160–166, New Orleans, Louisiana. Association for Computational Linguistics.
- Cite (Informal): Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment (Vu & Shwartz, *SEM 2018)
- PDF: https://preview.aclanthology.org/improve-issue-templates/S18-2020.pdf
- Data: EVALution