Distilling Hypernymy Relations from Language Models: On the Effectiveness of Zero-Shot Taxonomy Induction

Devansh Jain, Luis Espinosa Anke


Abstract
In this paper, we analyze zero-shot taxonomy learning methods which are based on distilling knowledge from language models via prompting and sentence scoring. We show that, despite their simplicity, these methods outperform some supervised strategies and are competitive with the current state-of-the-art under adequate conditions. We also show that statistical and linguistic properties of prompts dictate downstream performance.
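The abstract describes distilling hypernymy knowledge by scoring prompt sentences such as "X is a type of Y" with a language model. The sketch below illustrates that pipeline shape only; the `TOY_LOGPROBS` table and `sentence_score` function are hypothetical stand-ins for a real pretrained LM scorer (e.g. a pseudo-log-likelihood from a masked LM), so the example stays self-contained.

```python
import math

# Hypothetical Hearst-style prompt templates (the paper studies how such
# prompt properties affect performance; these two are illustrative only).
PROMPTS = [
    "{hypo} is a type of {hyper}.",
    "{hypo} is a kind of {hyper}.",
]

# Toy "language model": per-token log-probabilities from a hand-made table.
# A real system would replace this with scores from a pretrained LM.
TOY_LOGPROBS = {
    "dog": -2.0, "animal": -2.5, "vehicle": -6.0, "fruit": -6.5,
    "is": -1.0, "a": -0.8, "type": -1.5, "kind": -1.6, "of": -0.9,
}

def sentence_score(sentence: str) -> float:
    """Mean token log-probability (higher = more plausible sentence)."""
    tokens = sentence.rstrip(".").lower().split()
    return sum(TOY_LOGPROBS.get(t, -8.0) for t in tokens) / len(tokens)

def rank_hypernyms(hyponym: str, candidates: list) -> list:
    """Rank candidate hypernyms by their best-scoring prompt realization."""
    scored = []
    for hyper in candidates:
        best = max(
            sentence_score(p.format(hypo=hyponym, hyper=hyper))
            for p in PROMPTS
        )
        scored.append((hyper, best))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranking = rank_hypernyms("dog", ["animal", "vehicle", "fruit"])
print(ranking[0][0])  # the toy scorer ranks "animal" first
```

Taking the maximum over templates reflects the zero-shot setting: no prompt is trained, so each candidate pair is credited with its most fluent verbalization.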
Anthology ID: 2022.starsem-1.13
Volume: Proceedings of the 11th Joint Conference on Lexical and Computational Semantics
Month: July
Year: 2022
Address: Seattle, Washington
Venue: *SEM
SIG: SIGSEM
Publisher: Association for Computational Linguistics
Pages: 151–156
URL: https://aclanthology.org/2022.starsem-1.13
DOI: 10.18653/v1/2022.starsem-1.13
Cite (ACL): Devansh Jain and Luis Espinosa Anke. 2022. Distilling Hypernymy Relations from Language Models: On the Effectiveness of Zero-Shot Taxonomy Induction. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 151–156, Seattle, Washington. Association for Computational Linguistics.
Cite (Informal): Distilling Hypernymy Relations from Language Models: On the Effectiveness of Zero-Shot Taxonomy Induction (Jain & Espinosa Anke, *SEM 2022)
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.starsem-1.13.pdf