Modelling Commonsense Properties Using Pre-Trained Bi-Encoders

Amit Gajbhiye, Luis Espinosa-Anke, Steven Schockaert


Abstract
Grasping the commonsense properties of everyday concepts is an important prerequisite to language understanding. While contextualised language models are reportedly capable of predicting such commonsense properties with human-level accuracy, we argue that such results have been inflated because of the high similarity between training and test concepts. This means that models which capture concept similarity can perform well, even if they do not capture any knowledge of the commonsense properties themselves. In settings where there is no overlap between the properties that are considered during training and testing, we find that the empirical performance of standard language models drops dramatically. To address this, we study the possibility of fine-tuning language models to explicitly model concepts and their properties. In particular, we train separate concept and property encoders on two types of readily available data: extracted hyponym-hypernym pairs and generic sentences. Our experimental results show that the resulting encoders allow us to predict commonsense properties with much higher accuracy than is possible by directly fine-tuning language models. We also present experimental results for the related task of unsupervised hypernym discovery.
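The bi-encoder idea in the abstract — separate concept and property encoders whose outputs are compared to score whether a concept has a property — can be illustrated with a toy numpy sketch. The embeddings and projection matrices below are random stand-ins, not the paper's trained BERT-based encoders; names like `W_concept` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vectors standing in for pre-trained language-model representations
# (hypothetical stand-ins; the paper fine-tunes contextualised LM encoders).
concepts = {
    "banana": rng.normal(size=8),
    "lemon": rng.normal(size=8),
    "car": rng.normal(size=8),
}
properties = {
    "yellow": rng.normal(size=8),
    "has wheels": rng.normal(size=8),
}

# Two separate projection matrices play the role of the concept encoder
# and the property encoder in a bi-encoder architecture.
W_concept = rng.normal(size=(8, 4))
W_property = rng.normal(size=(8, 4))

def encode_concept(name: str) -> np.ndarray:
    return concepts[name] @ W_concept

def encode_property(name: str) -> np.ndarray:
    return properties[name] @ W_property

def score(concept: str, prop: str) -> float:
    """Plausibility that `concept` has `prop`: sigmoid of the
    dot product between the two independently encoded vectors."""
    z = float(encode_concept(concept) @ encode_property(prop))
    return 1.0 / (1.0 + np.exp(-z))
```

Because the two encoders never see each other's input, every concept and property can be embedded once and scored against each other cheaply; the actual model is trained on hyponym–hypernym pairs and generic sentences so that these scores reflect commonsense plausibility.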
Anthology ID:
2022.coling-1.349
Volume:
Proceedings of the 29th International Conference on Computational Linguistics
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Nicoletta Calzolari, Chu-Ren Huang, Hansaem Kim, James Pustejovsky, Leo Wanner, Key-Sun Choi, Pum-Mo Ryu, Hsin-Hsi Chen, Lucia Donatelli, Heng Ji, Sadao Kurohashi, Patrizia Paggio, Nianwen Xue, Seokhwan Kim, Younggyun Hahm, Zhong He, Tony Kyungil Lee, Enrico Santus, Francis Bond, Seung-Hoon Na
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3971–3983
URL:
https://aclanthology.org/2022.coling-1.349
Cite (ACL):
Amit Gajbhiye, Luis Espinosa-Anke, and Steven Schockaert. 2022. Modelling Commonsense Properties Using Pre-Trained Bi-Encoders. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3971–3983, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal):
Modelling Commonsense Properties Using Pre-Trained Bi-Encoders (Gajbhiye et al., COLING 2022)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2022.coling-1.349.pdf
Code
amitgajbhiye/biencoder_concept_property
Data
ConceptNet, GenericsKB, SemEval-2018 Task-9