Does the Correctness of Factual Knowledge Matter for Factual Knowledge-Enhanced Pre-trained Language Models?

Boxi Cao, Qiaoyu Tang, Hongyu Lin, Xianpei Han, Le Sun


Abstract
In recent years, the injection of factual knowledge has been observed to have a significant positive correlation with the downstream task performance of pre-trained language models. However, existing work neither demonstrates that pre-trained models successfully learn the injected factual knowledge nor proves that there is a causal relation between injected factual knowledge and downstream performance improvements. In this paper, we introduce a counterfactual-based analysis framework to explore the causal effects of factual knowledge injection on the performance of language models within the pretrain-finetune paradigm. Instead of directly probing the language model or exhaustively enumerating potential confounding factors, we analyze this issue by perturbing the factual knowledge sources at different scales and comparing the performance of pre-trained language models before and after the perturbation. Surprisingly, throughout our experiments, we find that although the knowledge seems to be successfully injected, the correctness of the injected knowledge has only a very limited effect on the models’ downstream performance. This finding strongly challenges the previous assumption that injected factual knowledge is the key to the performance improvements language models achieve on downstream tasks in the pretrain-finetune paradigm.
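To make the perturbation idea concrete, the sketch below illustrates one way such counterfactual knowledge sources could be built: a chosen fraction of factual triples has its object replaced at random before the triples are verbalized into pre-training text, so models trained on the original and perturbed corpora can be compared. The triple format, templates, and function names are illustrative assumptions, not the authors' implementation.

```python
import random

def perturb_triples(triples, ratio, seed=0):
    """Replace the object of a `ratio` fraction of (subject, relation, object) triples
    with a randomly drawn object, producing counterfactual (likely incorrect) facts."""
    rng = random.Random(seed)
    candidate_objects = [o for _, _, o in triples]
    perturbed = []
    for s, r, o in triples:
        if rng.random() < ratio:
            o = rng.choice(candidate_objects)  # swap in a random, likely wrong object
        perturbed.append((s, r, o))
    return perturbed

def triples_to_sentences(triples):
    """Verbalize triples into plain-text sentences used as a knowledge-injection corpus."""
    return [f"{s} {r} {o}." for s, r, o in triples]

# Toy knowledge source (hypothetical examples).
triples = [
    ("Paris", "is the capital of", "France"),
    ("Einstein", "was born in", "Ulm"),
    ("The Nile", "flows through", "Egypt"),
]

# Build corpora at different perturbation scales and compare downstream performance
# of models pre-trained on each.
for ratio in (0.0, 0.5, 1.0):
    corpus = triples_to_sentences(perturb_triples(triples, ratio))
    print(f"perturbation ratio {ratio}:", corpus)
```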
Anthology ID:
2023.emnlp-main.143
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2327–2340
URL:
https://aclanthology.org/2023.emnlp-main.143
DOI:
10.18653/v1/2023.emnlp-main.143
Cite (ACL):
Boxi Cao, Qiaoyu Tang, Hongyu Lin, Xianpei Han, and Le Sun. 2023. Does the Correctness of Factual Knowledge Matter for Factual Knowledge-Enhanced Pre-trained Language Models?. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2327–2340, Singapore. Association for Computational Linguistics.
Cite (Informal):
Does the Correctness of Factual Knowledge Matter for Factual Knowledge-Enhanced Pre-trained Language Models? (Cao et al., EMNLP 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2023.emnlp-main.143.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-3/2023.emnlp-main.143.mp4