Abstract
Concept graphs are built as universal taxonomies for open-domain text understanding. Their nodes include both entities and concepts, and their edges point from entities to concepts, indicating that an entity is an instance of a concept. In this paper, we propose the task of learning interpretable relationships from open-domain facts to enrich and refine concept graphs. Bayesian network structures are learned from open-domain facts as interpretable relationships between the relations of facts and the concepts of entities. We conduct extensive experiments on public English and Chinese datasets. Compared to state-of-the-art methods, the learned network structures help improve the identification of an entity's concepts from its relations on both datasets.
- Anthology ID:
- 2020.acl-main.717
- Volume:
- Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month:
- July
- Year:
- 2020
- Address:
- Online
- Editors:
- Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 8045–8056
- URL:
- https://aclanthology.org/2020.acl-main.717
- DOI:
- 10.18653/v1/2020.acl-main.717
- Cite (ACL):
- Jingyuan Zhang, Mingming Sun, Yue Feng, and Ping Li. 2020. Learning Interpretable Relationships between Entities, Relations and Concepts via Bayesian Structure Learning on Open Domain Facts. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8045–8056, Online. Association for Computational Linguistics.
- Cite (Informal):
- Learning Interpretable Relationships between Entities, Relations and Concepts via Bayesian Structure Learning on Open Domain Facts (Zhang et al., ACL 2020)
- PDF:
- https://preview.aclanthology.org/naacl24-info/2020.acl-main.717.pdf
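The abstract describes learning Bayesian network structures that connect the relations of facts to the concepts of entities. As a minimal illustrative sketch (not the authors' implementation), score-based structure learning can be shown by comparing the BIC scores of candidate DAGs over binary indicator variables; the variable names `rel_works_for` and `concept_person` below are hypothetical:

```python
import math

def bic_score(data, structure):
    """BIC of a discrete Bayesian network over binary variables:
    log-likelihood of the data minus a complexity penalty.
    `structure` maps each node to a tuple of its parent nodes."""
    n = len(data)
    score = 0.0
    for node, parents in structure.items():
        # Tally child-value counts under each parent configuration.
        counts = {}
        for row in data:
            cfg = tuple(row[p] for p in parents)
            counts.setdefault(cfg, {}).setdefault(row[node], 0)
            counts[cfg][row[node]] += 1
        log_lik = 0.0
        for dist in counts.values():
            total = sum(dist.values())
            for c in dist.values():
                log_lik += c * math.log(c / total)
        # One free parameter per parent configuration (binary node).
        k = 2 ** len(parents)
        score += log_lik - 0.5 * k * math.log(n)
    return score

# Toy data: a hypothetical "works_for" relation co-occurring with
# the "Person" concept (both names are illustrative only).
data = ([{"rel_works_for": 1, "concept_person": 1}] * 100 +
        [{"rel_works_for": 0, "concept_person": 0}] * 100)

independent = {"rel_works_for": (), "concept_person": ()}
rel_to_concept = {"rel_works_for": (), "concept_person": ("rel_works_for",)}

# The structure with the relation -> concept edge fits the data better.
print(bic_score(data, rel_to_concept) > bic_score(data, independent))  # True
```

In a full structure-learning setting, a search procedure (e.g., hill climbing) would propose edge additions, deletions, and reversals and keep the candidate DAG with the highest score; this sketch only compares two fixed candidates.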