Abstract
Domain knowledge is important for building Natural Language Processing (NLP) systems for low-resource settings, such as in the clinical domain. In this paper, a novel joint training method is introduced for adding knowledge base information from the Unified Medical Language System (UMLS) into language model pre-training on a clinical-domain corpus. We show that on three different downstream clinical NLP tasks, our pre-trained language model outperforms both the corresponding model without knowledge base information and other state-of-the-art models. Specifically, on a natural language inference task applied to clinical texts, our knowledge base pre-training approach improves accuracy by up to 1.7%, while on clinical named entity recognition tasks the F1-score improves by up to 1.0%. The pre-trained models are available at https://github.com/noc-lab/clinical-kb-bert.
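The repository above releases the pre-trained model weights. Below is a minimal sketch of loading the released checkpoint with the HuggingFace transformers library; the local path `./clinical-kb-bert` is a hypothetical download location, and the sketch assumes the checkpoint is in standard BERT format (see the repository for the actual files and instructions).

```python
# Minimal sketch: load the released clinical KB-enhanced BERT checkpoint
# with HuggingFace transformers. "./clinical-kb-bert" is a hypothetical
# local directory where the model from the GitHub repo has been placed.
from transformers import AutoModel, AutoTokenizer

model_path = "./clinical-kb-bert"  # hypothetical checkpoint directory
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path)

# Encode a short clinical sentence and take the [CLS] embedding; downstream
# tasks such as NLI or NER would fine-tune on top of these representations.
inputs = tokenizer("Patient denies chest pain or dyspnea.", return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0, :]
print(cls_embedding.shape)  # torch.Size([1, 768]) for a BERT-base model
```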
- Anthology ID:
- 2020.coling-main.57
- Volume:
- Proceedings of the 28th International Conference on Computational Linguistics
- Month:
- December
- Year:
- 2020
- Address:
- Barcelona, Spain (Online)
- Venue:
- COLING
- Publisher:
- International Committee on Computational Linguistics
- Pages:
- 657–661
- URL:
- https://aclanthology.org/2020.coling-main.57
- DOI:
- 10.18653/v1/2020.coling-main.57
- Cite (ACL):
- Boran Hao, Henghui Zhu, and Ioannis Paschalidis. 2020. Enhancing Clinical BERT Embedding using a Biomedical Knowledge Base. In Proceedings of the 28th International Conference on Computational Linguistics, pages 657–661, Barcelona, Spain (Online). International Committee on Computational Linguistics.
- Cite (Informal):
- Enhancing Clinical BERT Embedding using a Biomedical Knowledge Base (Hao et al., COLING 2020)
- PDF:
- https://aclanthology.org/2020.coling-main.57.pdf
- Code:
- noc-lab/clinical-kb-bert
- Data:
- MIMIC-III