Abstract
Transformer models pre-trained with a masked-language-modeling objective (e.g., BERT) encode commonsense knowledge as evidenced by behavioral probes; however, the extent to which this knowledge is acquired by systematic inference over the semantics of the pre-training corpora is an open question. To answer this question, we selectively inject verbalized knowledge into the pre-training minibatches of BERT and evaluate how well the model generalizes to supported inferences after pre-training on the injected knowledge. We find generalization does not improve over the course of pre-training BERT from scratch, suggesting that commonsense knowledge is acquired from surface-level, co-occurrence patterns rather than induced, systematic reasoning.
- Anthology ID: 2022.naacl-main.337
- Volume: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month: July
- Year: 2022
- Address: Seattle, United States
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 4550–4557
- URL: https://aclanthology.org/2022.naacl-main.337
- DOI: 10.18653/v1/2022.naacl-main.337
- Cite (ACL): Ian Porada, Alessandro Sordoni, and Jackie Cheung. 2022. Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4550–4557, Seattle, United States. Association for Computational Linguistics.
- Cite (Informal): Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge (Porada et al., NAACL 2022)
- PDF: https://preview.aclanthology.org/auto-file-uploads/2022.naacl-main.337.pdf