Abstract
Deep neural networks have demonstrated high performance on many natural language processing (NLP) tasks that can be answered directly from text, but have struggled to solve NLP tasks requiring external (e.g., world) knowledge. In this paper, we present OSCR (Ontology-based Semantic Composition Regularization), a method for injecting task-agnostic knowledge from an ontology or knowledge graph into a neural network during pre-training. We evaluated BERT pre-trained on Wikipedia with and without OSCR by fine-tuning on two question answering tasks involving world knowledge and causal reasoning and one requiring domain (healthcare) knowledge, obtaining 33.3%, 18.6%, and 4% improvements in accuracy over BERT pre-trained without OSCR.
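The abstract does not spell out OSCR's loss function; as a rough, non-authoritative sketch, the snippet below illustrates the general pattern of injecting knowledge-graph structure through a regularization term added to a masked-language-model pre-training objective. All names (`regularized_mlm_loss`, `ontology_pairs`, `lambda_onto`) are hypothetical, and the regularizer shown (pulling together embeddings of ontologically related tokens) is a generic stand-in, not the paper's actual formulation.

```python
# Generic sketch (an assumption, not OSCR itself): add an ontology-derived
# regularizer to a masked-language-model (MLM) pre-training loss.
import torch
import torch.nn.functional as F

def regularized_mlm_loss(mlm_logits: torch.Tensor,
                         mlm_labels: torch.Tensor,
                         token_embeddings: torch.Tensor,
                         ontology_pairs: torch.Tensor,
                         lambda_onto: float = 0.1) -> torch.Tensor:
    """mlm_logits: (batch, seq, vocab); mlm_labels: (batch, seq) with -100
    at unmasked positions; token_embeddings: (vocab, dim) embedding table;
    ontology_pairs: (num_pairs, 2) ids of tokens linked in the ontology."""
    # Standard masked-language-model cross-entropy over masked positions.
    vocab = mlm_logits.size(-1)
    mlm = F.cross_entropy(mlm_logits.reshape(-1, vocab),
                          mlm_labels.reshape(-1), ignore_index=-100)
    # Regularizer: encourage embeddings of ontologically related
    # concepts to agree (cosine similarity close to 1).
    a = token_embeddings[ontology_pairs[:, 0]]
    b = token_embeddings[ontology_pairs[:, 1]]
    onto = (1.0 - F.cosine_similarity(a, b, dim=-1)).mean()
    return mlm + lambda_onto * onto
```

In a setup like this, the knowledge pairs could come from edges of a resource such as ConceptNet (one of the datasets listed below), and fine-tuning on the downstream QA tasks would proceed as usual after the regularized pre-training.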
- Anthology ID: 2020.deelio-1.7
- Volume: Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
- Month: November
- Year: 2020
- Address: Online
- Editors: Eneko Agirre, Marianna Apidianaki, Ivan Vulić
- Venue: DeeLIO
- Publisher: Association for Computational Linguistics
- Pages: 56–63
- URL: https://aclanthology.org/2020.deelio-1.7
- DOI: 10.18653/v1/2020.deelio-1.7
- Cite (ACL): Travis Goodwin and Dina Demner-Fushman. 2020. Enhancing Question Answering by Injecting Ontological Knowledge through Regularization. In Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 56–63, Online. Association for Computational Linguistics.
- Cite (Informal): Enhancing Question Answering by Injecting Ontological Knowledge through Regularization (Goodwin & Demner-Fushman, DeeLIO 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2020.deelio-1.7.pdf
- Data: COPA, ConceptNet, SQuAD