Learning Patient Representations from Text

Dmitriy Dligach, Timothy Miller


Abstract
Mining electronic health records for patients who satisfy a set of predefined criteria is known in medical informatics as phenotyping. Phenotyping has numerous applications such as outcome prediction, clinical trial recruitment, and retrospective studies. Supervised machine learning for phenotyping typically relies on sparse patient representations such as bag-of-words. We consider an alternative that involves learning patient representations. We develop a neural network model for learning patient representations and show that the learned representations are general enough to obtain state-of-the-art performance on a standard comorbidity detection task.
Anthology ID:
S18-2014
Volume:
Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Editors:
Malvina Nissim, Jonathan Berant, Alessandro Lenci
Venue:
*SEM
SIGs:
SIGLEX | SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
119–123
URL:
https://aclanthology.org/S18-2014
DOI:
10.18653/v1/S18-2014
Cite (ACL):
Dmitriy Dligach and Timothy Miller. 2018. Learning Patient Representations from Text. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 119–123, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Learning Patient Representations from Text (Dligach & Miller, *SEM 2018)
PDF:
https://aclanthology.org/S18-2014.pdf
Code:
dmitriydligach/starsem2018-patient-representations
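The repository above contains the authors' implementation. As a rough, hypothetical sketch of the general idea summarized in the abstract (training a neural network on patient text and reusing a dense hidden layer as the learned patient representation), the following Python/PyTorch snippet is illustrative only; the bag-of-words input, layer sizes, auxiliary prediction targets, and all names are assumptions, not details taken from the paper or the repository.

import torch
import torch.nn as nn

class PatientEncoder(nn.Module):
    """Sketch of a patient-representation learner (assumed architecture)."""

    def __init__(self, vocab_size: int, repr_dim: int = 300, num_targets: int = 100):
        super().__init__()
        # Hidden layer whose activations serve as the dense patient representation.
        self.hidden = nn.Linear(vocab_size, repr_dim)
        # Auxiliary multi-label output used only to supervise representation learning.
        self.output = nn.Linear(repr_dim, num_targets)

    def forward(self, bow: torch.Tensor) -> torch.Tensor:
        # bow: sparse-style bag-of-words counts for each patient's notes.
        rep = torch.relu(self.hidden(bow))
        return self.output(rep)  # logits for the auxiliary targets

    def represent(self, bow: torch.Tensor) -> torch.Tensor:
        # After training, reuse the hidden layer as a general patient representation
        # for downstream tasks such as comorbidity detection.
        return torch.relu(self.hidden(bow))

if __name__ == "__main__":
    model = PatientEncoder(vocab_size=5000)
    batch = torch.rand(2, 5000)          # two hypothetical patients
    reps = model.represent(batch)        # dense representations, shape (2, 300)
    print(reps.shape)

In this kind of setup, the auxiliary task is trained with a standard multi-label loss, and the frozen hidden-layer outputs are then fed to a conventional classifier for the target phenotyping task; the specific choices here are assumptions for illustration.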