Alejo Nevado-Holgado
2020
An efficient representation of chronological events in medical texts
Andrey Kormilitzin | Nemanja Vaci | Qiang Liu | Hao Ni | Goran Nenadic | Alejo Nevado-Holgado
Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis
In this work we addressed the problem of capturing sequential information contained in longitudinal electronic health records (EHRs). Clinical notes, which are a particular type of EHR data, are a rich source of information, and practitioners often develop clever solutions to maximise the sequential information contained in free texts. We proposed a systematic methodology for learning from the chronological events available in clinical notes. The proposed path signature framework creates a non-parametric hierarchical representation of sequential events of any type, which can be used as features for downstream statistical learning tasks. The methodology was developed and externally validated using the largest secondary care mental health EHR dataset in the UK, on the task of predicting the survival risk of patients diagnosed with Alzheimer’s disease. The signature-based model was compared to a common survival random forest model. Our results showed a 15.4% increase in risk prediction AUC at the time point of 20 months after the first admission to a specialist memory clinic, and the signature method outperformed the baseline mixed-effects model by 13.2%.
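As a concrete illustration of the signature construction described in the abstract, the sketch below turns a patient's timestamped event sequence into a fixed-length feature vector. The open-source iisignature package, the event values, and the truncation depth are assumptions made for illustration; the paper does not name its implementation.

```python
# Minimal sketch: path-signature features from a chronological event sequence.
# Assumes the iisignature package; event values below are hypothetical.
import numpy as np
import iisignature

# A patient's chronological events as a 2-D path: each row is
# (months since first admission, measurement value).
path = np.array([
    [0.0, 23.0],   # e.g. a cognitive test score at first admission
    [6.0, 21.0],
    [14.0, 18.5],
    [20.0, 16.0],
])

depth = 3  # truncation level of the signature
features = iisignature.sig(path, depth)

# The truncated signature is a fixed-length, non-parametric summary of the
# sequence, usable as input to any downstream model (e.g. a survival forest).
assert features.shape == (iisignature.siglength(path.shape[1], depth),)
print(features[:5])
```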
Information Extraction from Swedish Medical Prescriptions with Sig-Transformer Encoder
John Pougué Biyong | Bo Wang | Terry Lyons | Alejo Nevado-Holgado
Proceedings of the 3rd Clinical Natural Language Processing Workshop
Relying on large pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT) for encoding, and adding a simple prediction layer, has led to impressive performance in many clinical natural language processing (NLP) tasks. In this work, we present a novel extension to the Transformer architecture that incorporates the signature transform with the self-attention model. This architecture is added between the embedding and prediction layers. Experiments on new Swedish prescription data show the proposed architecture to be superior to baseline models in two of the three information extraction tasks. Finally, we evaluate two different embedding approaches: applying Multilingual BERT, and translating the Swedish text to English and then encoding it with a BERT model pretrained on clinical notes.
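To show where a signature transform can sit between the embedding and prediction layers, here is a hedged PyTorch sketch of a self-attention-plus-signature block. The signatory library, the layer sizes, and the composition order are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a "Sig-Transformer"-style block: self-attention over token
# embeddings, then a signature transform, then a prediction layer.
# Assumes the signatory package; all sizes are illustrative.
import torch
import torch.nn as nn
import signatory

class SigTransformerBlock(nn.Module):
    def __init__(self, embed_dim=64, num_heads=4, sig_depth=2, num_labels=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.sig_depth = sig_depth
        sig_dim = signatory.signature_channels(embed_dim, sig_depth)
        self.classifier = nn.Linear(sig_dim, num_labels)

    def forward(self, x):                     # x: (batch, seq_len, embed_dim)
        attended, _ = self.attn(x, x, x)      # self-attention over the sequence
        sig = signatory.signature(attended, self.sig_depth)  # (batch, sig_dim)
        return self.classifier(sig)           # one prediction per sequence

# Usage with dummy BERT-style embeddings (batch of 8, 32 tokens, 64 dims):
logits = SigTransformerBlock()(torch.randn(8, 32, 64))
```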