Adapting Pre-trained Word Embeddings For Use In Medical Coding

Kevin Patel, Divya Patel, Mansi Golakiya, Pushpak Bhattacharyya, Nilesh Birari


Abstract
Word embeddings are a crucial component in modern NLP. Pre-trained embeddings released by different groups have been a major reason for their popularity. However, they are trained on generic corpora, which limits their direct use in domain-specific tasks. In this paper, we propose a method to add task-specific information to pre-trained word embeddings, which can improve their utility. We add information from medical coding data, as well as the first level of the ICD-10 medical code hierarchy, to different pre-trained word embeddings. We adapt the CBOW algorithm from the word2vec package for this purpose. We evaluated our approach on five different pre-trained word embeddings. Both the original word embeddings and their modified versions (those with added information) were used for automated review of medical coding. The modified word embeddings improve F-score by 1% in a 5-fold evaluation on a private medical claims dataset. Our results show that adding extra information is both possible and beneficial for the task at hand.
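The abstract does not spell out the adaptation procedure, but the core idea — continuing CBOW training while augmenting each context window with the claim's first-level ICD-10 chapter token — can be sketched as below. This is a minimal illustration, not the authors' implementation: the vocabulary, chapter tokens (`CH_*`), and training data are hypothetical, and real use would start from actual pre-trained vectors rather than random ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: claim-description words plus first-level ICD-10 chapter
# tokens (hypothetical; the paper uses a private medical claims dataset).
vocab = ["fracture", "of", "left", "femur", "chest", "pain", "acute",
         "CH_XIX", "CH_IX"]  # CH_* = ICD-10 chapter tokens
idx = {w: i for i, w in enumerate(vocab)}
dim = 8
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))  # stand-in "pre-trained" vectors
W_out = np.zeros((len(vocab), dim))                   # output (context) vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbow_step(context, target, negatives, lr=0.05):
    """One CBOW update with negative sampling; the context window is
    augmented with the claim's ICD-10 chapter token."""
    h = W_in[context].mean(axis=0)  # averaged context representation
    for w, label in [(target, 1.0)] + [(n, 0.0) for n in negatives]:
        g = (sigmoid(W_out[w] @ h) - label) * lr
        W_in[context] -= g * W_out[w] / len(context)
        W_out[w] -= g * h

# A claim description paired with its ICD-10 chapter (XIX: injuries).
words = ["fracture", "of", "left", "femur"]
chapter = "CH_XIX"
for t, target in enumerate(words):
    # Standard CBOW context (all other words), plus the chapter token,
    # so chapter and description vectors are drawn together.
    context = [idx[w] for i, w in enumerate(words) if i != t] + [idx[chapter]]
    cbow_step(context, idx[target], negatives=[idx["chest"], idx["pain"]])
```

Because the chapter token appears in every context window of the claim, its vector is pulled toward the vectors of the words it co-occurs with, which is one plausible way hierarchy information could be folded into existing embeddings.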
Anthology ID:
W17-2338
Volume:
BioNLP 2017
Month:
August
Year:
2017
Address:
Vancouver, Canada
Editors:
Kevin Bretonnel Cohen, Dina Demner-Fushman, Sophia Ananiadou, Junichi Tsujii
Venue:
BioNLP
SIG:
SIGBIOMED
Publisher:
Association for Computational Linguistics
Pages:
302–306
URL:
https://aclanthology.org/W17-2338
DOI:
10.18653/v1/W17-2338
Cite (ACL):
Kevin Patel, Divya Patel, Mansi Golakiya, Pushpak Bhattacharyya, and Nilesh Birari. 2017. Adapting Pre-trained Word Embeddings For Use In Medical Coding. In BioNLP 2017, pages 302–306, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal):
Adapting Pre-trained Word Embeddings For Use In Medical Coding (Patel et al., BioNLP 2017)
PDF:
https://preview.aclanthology.org/fix-dup-bibkey/W17-2338.pdf