Abstract
We introduce a conceptually simple and effective method to quantify the similarity between relations in knowledge bases. Specifically, our approach is based on the divergence between the conditional probability distributions over entity pairs. In this paper, these distributions are parameterized by a very simple neural network. Although computing the exact similarity is intractable, we provide a sampling-based method to obtain a good approximation. We empirically show that the outputs of our approach correlate significantly with human judgments. By applying our method to various tasks, we also find that (1) our approach can effectively detect redundant relations extracted by open information extraction (Open IE) models, (2) even the most competitive models for relational classification still make mistakes among very similar relations, and (3) our approach can be incorporated into negative sampling and softmax classification to alleviate these mistakes.
- Anthology ID:
- P19-1278
- Volume:
- Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
- Month:
- July
- Year:
- 2019
- Address:
- Florence, Italy
- Editors:
- Anna Korhonen, David Traum, Lluís Màrquez
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2882–2894
- URL:
- https://aclanthology.org/P19-1278
- DOI:
- 10.18653/v1/P19-1278
- Cite (ACL):
- Weize Chen, Hao Zhu, Xu Han, Zhiyuan Liu, and Maosong Sun. 2019. Quantifying Similarity between Relations with Fact Distribution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2882–2894, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal):
- Quantifying Similarity between Relations with Fact Distribution (Chen et al., ACL 2019)
- PDF:
- https://preview.aclanthology.org/ingest-acl-2023-videos/P19-1278.pdf
- Code
- thunlp/relation-similarity
- Data
- FB15k, TACRED
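
The abstract's sampling-based approximation can be sketched in miniature: the divergence between two distributions is estimated by drawing samples from one and averaging the log-probability ratio. This is a toy illustration only; the distributions below are hypothetical hand-written categorical distributions over (head, tail) entity pairs, whereas the paper parameterizes them with a neural network, and the function name `kl_monte_carlo` is our own.

```python
import math
import random

def kl_monte_carlo(p, q, num_samples=100_000, seed=0):
    """Estimate KL(P || Q) = E_{x~P}[log P(x) - log Q(x)] by sampling from P."""
    rng = random.Random(seed)
    pairs = list(p)
    weights = [p[x] for x in pairs]
    total = 0.0
    for _ in range(num_samples):
        x = rng.choices(pairs, weights=weights)[0]  # draw an entity pair from P
        total += math.log(p[x]) - math.log(q[x])
    return total / num_samples

# Hypothetical fact distributions for two relations over the same entity pairs.
p = {("Paris", "France"): 0.7, ("Rome", "Italy"): 0.2, ("Tokyo", "Japan"): 0.1}
q = {("Paris", "France"): 0.5, ("Rome", "Italy"): 0.3, ("Tokyo", "Japan"): 0.2}

estimate = kl_monte_carlo(p, q)
exact = sum(p[x] * (math.log(p[x]) - math.log(q[x])) for x in p)
print(f"Monte Carlo estimate: {estimate:.4f}  (exact: {exact:.4f})")
```

With enough samples the estimate converges to the exact divergence; in the paper's setting, where the support (all entity pairs) is far too large to enumerate, only the sampled estimate is tractable.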