LM-CORE: Language Models with Contextually Relevant External Knowledge

Jivat Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, Balaji Krishnamurthy


Abstract
Large transformer-based pre-trained language models have achieved impressive performance on a variety of knowledge-intensive tasks and can capture factual knowledge in their parameters. We argue that storing large amounts of knowledge in the model parameters is sub-optimal given the ever-growing amounts of knowledge and resource requirements. We posit that a more efficient alternative is to give the model explicit access to contextually relevant structured knowledge and to train it to use that knowledge. We present LM-CORE, a general framework that achieves this by decoupling language model training from the external knowledge source, allowing the knowledge source to be updated without affecting the already trained model. Experimental results show that LM-CORE, with access to external knowledge, significantly and robustly outperforms state-of-the-art knowledge-enhanced language models on knowledge probing tasks, handles knowledge updates effectively, and performs well on two downstream tasks. We also present a thorough error analysis highlighting the successes and failures of LM-CORE. Our code and model checkpoints are publicly available.
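To make the retrieve-and-condition idea concrete, the sketch below (ours, not the authors' implementation) shows one way such a setup could look: a naive entity-match retriever over a toy triple store, with retrieved facts linearized into text and prepended to a masked-LM query. The toy triple store, the `retrieve` helper, and the choice of `bert-base-uncased` are all illustrative assumptions; LM-CORE's actual retriever, knowledge sources (e.g., ConceptNet, Wikidata5M), and training procedure are described in the paper.

```python
# Illustrative sketch only, not the authors' method: retrieve structured
# facts and supply them in the input instead of storing them in parameters.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical toy knowledge source; the paper uses large KBs such as
# ConceptNet and Wikidata5M.
TRIPLES = [
    ("Paris", "capital of", "France"),
    ("Paris", "located in", "Europe"),
]

def retrieve(query: str, k: int = 2):
    """Naive retrieval: keep triples whose subject string appears in the query."""
    return [t for t in TRIPLES if t[0].lower() in query.lower()][:k]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

query = "Paris is the capital of [MASK]."
# Linearize retrieved triples into text, e.g. "Paris capital of France."
context = " ".join(" ".join(t) + "." for t in retrieve(query))

# Because the knowledge lives outside the model, updating the triple
# store changes the model's predictions without any retraining.
inputs = tokenizer(context + " " + query, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
print(tokenizer.decode([logits[0, mask_pos].argmax().item()]))
```

With the toy store above, the masked position is predicted from the prepended facts rather than from parametric memory alone; swapping in different triples changes the prediction with no retraining, which is the decoupling property the abstract highlights.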
Anthology ID:
2022.findings-naacl.57
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
750–769
URL:
https://aclanthology.org/2022.findings-naacl.57
DOI:
10.18653/v1/2022.findings-naacl.57
Cite (ACL):
Jivat Kaur, Sumit Bhatia, Milan Aggarwal, Rachit Bansal, and Balaji Krishnamurthy. 2022. LM-CORE: Language Models with Contextually Relevant External Knowledge. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 750–769, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
LM-CORE: Language Models with Contextually Relevant External Knowledge (Kaur et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.57.pdf
Video:
https://aclanthology.org/2022.findings-naacl.57.mp4
Code:
sumit-research/lmcore
Data:
ConceptNet, LAMA, WebQuestions, Wikidata5M, YAGO