Materialized Knowledge Bases from Commonsense Transformers

Tuan-Phong Nguyen, Simon Razniewski


Abstract
Starting from the COMET methodology by Bosselut et al. (2019), generating commonsense knowledge directly from pre-trained language models has recently received significant attention. Surprisingly, up to now no materialized resource of commonsense knowledge generated this way is publicly available. This paper fills this gap and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall. Furthermore, we identify common problem cases and outline use cases enabled by materialized resources. We posit that the availability of these resources is important for the advancement of the field, as it enables an off-the-shelf use of the resulting knowledge, as well as further analyses of its strengths and weaknesses.
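To make the materialization idea concrete, the sketch below shows how commonsense triples could be generated from a COMET-style sequence-to-sequence model and stored as an explicit knowledge base, rather than queried on the fly. It is a minimal illustration, not the authors' pipeline: the checkpoint identifier, the "{head} {relation} [GEN]" prompt format, and the sample relations are assumptions based on the COMET-ATOMIC 2020 release and may need adjusting for a given checkpoint.

```python
# Hedged sketch: materializing commonsense triples with a COMET-style
# seq2seq model via Hugging Face transformers. The checkpoint id, prompt
# format, and relation names below are assumptions, not from the paper.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "allenai/comet-atomic_2020_BART"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def generate_tails(head: str, relation: str, num_beams: int = 5):
    """Generate candidate tail phrases for a (head, relation) query."""
    # COMET-ATOMIC 2020 conventionally prompts with "<head> <relation> [GEN]".
    prompt = f"{head} {relation} [GEN]"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=num_beams,
        num_return_sequences=num_beams,
        max_length=32,
    )
    return [tokenizer.decode(o, skip_special_tokens=True).strip()
            for o in outputs]

# Materialize a small KB: enumerate head x relation pairs once and
# store every generated tail as an explicit (head, relation, tail) triple.
heads = ["PersonX goes jogging", "bread"]
relations = ["xNeed", "xEffect", "AtLocation"]  # sample ATOMIC/ConceptNet relations
kb = [(h, r, t)
      for h in heads
      for r in relations
      for t in generate_tails(h, r)]
for triple in kb:
    print(triple)
```

Once triples are written out this way, the resulting resource can be inspected, filtered for precision, or loaded into downstream systems without GPU-backed generation at query time, which is the off-the-shelf use the abstract refers to.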
Anthology ID:
2022.csrr-1.5
Volume:
Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Antoine Bosselut, Xiang Li, Bill Yuchen Lin, Vered Shwartz, Bodhisattwa Prasad Majumder, Yash Kumar Lal, Rachel Rudinger, Xiang Ren, Niket Tandon, Vilém Zouhar
Venue:
CSRR
Publisher:
Association for Computational Linguistics
Pages:
36–42
URL:
https://aclanthology.org/2022.csrr-1.5
DOI:
10.18653/v1/2022.csrr-1.5
Cite (ACL):
Tuan-Phong Nguyen and Simon Razniewski. 2022. Materialized Knowledge Bases from Commonsense Transformers. In Proceedings of the First Workshop on Commonsense Representation and Reasoning (CSRR 2022), pages 36–42, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Materialized Knowledge Bases from Commonsense Transformers (Nguyen & Razniewski, CSRR 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2022.csrr-1.5.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-3/2022.csrr-1.5.mp4
Data
ConceptNet, WebText