2024
When Is a Name Sensitive? Eponyms in Clinical Text and Implications for De-Identification
Thomas Vakili | Tyr Hullmann | Aron Henriksson | Hercules Dalianis
Proceedings of the Workshop on Computational Approaches to Language Data Pseudonymization (CALD-pseudo 2024)
Clinical data, in the form of electronic health records, are rich resources that can be tapped using natural language processing. At the same time, they contain very sensitive information that must be protected. One strategy is to remove or obscure data using automatic de-identification. However, the detection of sensitive data can yield false positives. This is especially true for tokens that are similar in form to sensitive entities, such as eponyms. These names tend to refer to medical procedures or diagnoses rather than specific persons. Previous research has shown that automatic de-identification systems often misclassify eponyms as names, leading to a loss of valuable medical information. In this study, we estimate the prevalence of eponyms in a real Swedish clinical corpus. Furthermore, we demonstrate that modern transformer-based de-identification systems are more accurate in distinguishing between names and eponyms than previous approaches.
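For illustration, a minimal sketch of the kind of safeguard this finding points to: a transformer-based token classifier flags person names, and flagged spans are checked against an eponym lexicon before redaction. The model identifier, entity label scheme, and lexicon below are hypothetical placeholders, not the systems evaluated in the paper.

from transformers import pipeline

# Hypothetical fine-tuned token-classification model for detecting
# personal names in clinical text (placeholder model id and labels).
ner = pipeline("token-classification",
               model="some-org/swedish-phi-ner",
               aggregation_strategy="simple")

# Small illustrative lexicon of eponyms that should NOT be redacted,
# since they denote diagnoses or procedures rather than persons.
EPONYMS = {"parkinson", "alzheimer", "crohn", "babinski"}

def deidentify(text):
    redacted = text
    # Replace from the end of the string so earlier offsets stay valid.
    for ent in sorted(ner(text), key=lambda e: -e["start"]):
        if ent["entity_group"] == "PER" and ent["word"].lower() not in EPONYMS:
            redacted = redacted[:ent["start"]] + "[NAME]" + redacted[ent["end"]:]
    return redacted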
2022
Evaluating Pretraining Strategies for Clinical BERT Models
Anastasios Lamproudis | Aron Henriksson | Hercules Dalianis
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Research suggests that using generic language models in specialized domains may be sub-optimal due to significant domain differences. As a result, various strategies for developing domain-specific language models have been proposed, including techniques for adapting an existing generic language model to the target domain, e.g. through various forms of vocabulary modifications and continued domain-adaptive pretraining with in-domain data. Here, an empirical investigation is carried out in which various strategies for adapting a generic language model to the clinical domain are compared to pretraining a pure clinical language model. Three clinical language models for Swedish, pretrained for up to ten epochs, are fine-tuned and evaluated on several downstream tasks in the clinical domain. A comparison of the language models’ downstream performance over the training epochs is conducted. The results show that the domain-specific language models outperform a general-domain language model, although there is little difference in performance between the various clinical language models. However, compared to pretraining a pure clinical language model with only in-domain data, leveraging and adapting an existing general-domain language model requires fewer epochs of pretraining with in-domain data.
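As a rough sketch of the continued domain-adaptive pretraining strategy compared here, using Hugging Face Transformers: the generic checkpoint, corpus path, and hyperparameters below are illustrative stand-ins rather than the paper's exact setup (only the up-to-ten-epochs budget is taken from the abstract).

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

# A generic Swedish BERT checkpoint (placeholder choice).
model_name = "KB/bert-base-swedish-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# "clinical.txt": one in-domain document per line (placeholder path).
dataset = load_dataset("text", data_files={"train": "clinical.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clinical-bert",
                           num_train_epochs=10,  # paper pretrains for up to ten epochs
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    # Standard masked language modelling objective with 15% masking.
    data_collator=DataCollatorForLanguageModeling(tokenizer,
                                                  mlm_probability=0.15),
)
trainer.train()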
Downstream Task Performance of BERT Models Pre-Trained Using Automatically De-Identified Clinical Data
Thomas Vakili | Anastasios Lamproudis | Aron Henriksson | Hercules Dalianis
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Automatic de-identification is a cost-effective and straightforward way of removing large amounts of personally identifiable information from sensitive corpora. However, these systems also introduce errors into datasets due to their imperfect precision. These corruptions of the data may negatively impact the utility of the de-identified dataset. This paper de-identifies a very large clinical corpus in Swedish either by removing entire sentences containing sensitive data or by replacing sensitive words with realistic surrogates. These two datasets are used to perform domain adaptation of a general Swedish BERT model. The impact of the de-identification techniques is assessed by training and evaluating the models using six clinical downstream tasks. The results are then compared to those of a similar BERT model domain-adapted using an unaltered version of the clinical corpus. The results show that using an automatically de-identified corpus for domain adaptation does not negatively impact downstream performance. We argue that automatic de-identification is an efficient way of reducing the privacy risks of domain-adapted models and that the models created in this paper should be safe to distribute to other academic researchers.
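A minimal sketch of the two de-identification strategies compared in the paper, assuming sensitive spans have already been detected; the toy sentences, entity labels, and surrogate pools are invented for illustration.

import random

# Invented surrogate pools per entity type (illustrative only).
SURROGATES = {"PER": ["Anna Larsson", "Erik Berg"],
              "LOC": ["Uppsala", "Lund"]}

def remove_sentences(sentences, sensitive):
    # Strategy 1: drop every sentence containing a sensitive span.
    return [s for s in sentences
            if not any(span in s for span, _ in sensitive)]

def replace_with_surrogates(sentences, sensitive):
    # Strategy 2: keep all sentences, swapping each sensitive span
    # for a realistic surrogate of the same entity type.
    out = []
    for s in sentences:
        for span, label in sensitive:
            s = s.replace(span, random.choice(SURROGATES[label]))
        out.append(s)
    return out

sents = ["Pat. Sven Svensson admitted from Solna.", "No fever today."]
spans = [("Sven Svensson", "PER"), ("Solna", "LOC")]
print(remove_sentences(sents, spans))         # ['No fever today.']
print(replace_with_surrogates(sents, spans))  # both sentences kept, spans swapped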
2021
Developing a Clinical Language Model for Swedish: Continued Pretraining of Generic BERT with In-Domain Data
Anastasios Lamproudis | Aron Henriksson | Hercules Dalianis
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
The use of pretrained language models, fine-tuned to perform a specific downstream task, has become widespread in NLP. Using a generic language model in specialized domains may, however, be sub-optimal due to differences in language use and vocabulary. In this paper, it is investigated whether an existing, generic language model for Swedish can be improved for the clinical domain through continued pretraining with clinical text. The generic and domain-specific language models are fine-tuned and evaluated on three representative clinical NLP tasks: (i) identifying protected health information, (ii) assigning ICD-10 diagnosis codes to discharge summaries, and (iii) sentence-level uncertainty prediction. The results show that continued pretraining on in-domain data leads to improved performance on all three downstream tasks, indicating a potential added value of domain-specific language models for clinical NLP.
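For concreteness, a hedged sketch of the fine-tuning step for one of the three tasks, sentence-level uncertainty prediction, cast as binary sequence classification; the checkpoint path, example sentences, and labels are placeholders, not the paper's data.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# A domain-adapted checkpoint produced by continued pretraining
# (placeholder path).
tokenizer = AutoTokenizer.from_pretrained("clinical-bert")
model = AutoModelForSequenceClassification.from_pretrained(
    "clinical-bert", num_labels=2)  # 0 = certain, 1 = uncertain

batch = tokenizer(["Possible pneumonia.", "Fracture confirmed on X-ray."],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

# One illustrative training step.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()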
2020
The Impact of De-identification on Downstream Named Entity Recognition in Clinical Text
Hanna Berg | Aron Henriksson | Hercules Dalianis
Proceedings of the 11th International Workshop on Health Text Mining and Information Analysis
The impact of de-identification on data quality and, in particular, on utility for developing models for downstream tasks has been more thoroughly studied for structured data than for unstructured text. While previous studies indicate that text de-identification has a limited impact on models for downstream tasks, it remains unclear what the impact of various levels and forms of de-identification is, in particular concerning the trade-off between precision and recall. In this paper, the impact of de-identification is studied on downstream named entity recognition in Swedish clinical text. The results indicate that de-identification models with moderate to high precision lead to similar downstream performance, while low precision has a substantial negative impact. Furthermore, different strategies for concealing sensitive information affect performance to different degrees, ranging from pseudonymisation, which has a low impact, to the removal of entire sentences with sensitive information, which has a high impact. This study indicates that it is possible to increase the recall of models for identifying sensitive information without negatively affecting the use of de-identified text data for training models for clinical named entity recognition; however, there is ultimately a trade-off between the level of de-identification and the subsequent utility of the data.
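To make the precision-recall trade-off concrete, a small illustrative calculation with invented token sets (not the paper's data): a high-recall but low-precision de-identifier also strips clinically relevant tokens that downstream NER training needs.

def precision_recall(predicted, gold):
    # Token-level evaluation of a de-identification model.
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 1.0
    recall = tp / len(gold) if gold else 1.0
    return precision, recall

gold_sensitive = {"Sven", "Svensson", "Solna"}
# Over-eager model: catches everything sensitive, plus medical terms.
high_recall = {"Sven", "Svensson", "Solna", "pneumonia", "fever"}
# Conservative model: only redacts what it is sure about.
high_precision = {"Sven", "Svensson"}

print(precision_recall(high_recall, gold_sensitive))     # (0.6, 1.0)
print(precision_recall(high_precision, gold_sensitive))  # (1.0, ~0.67)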
2015
Expanding a dictionary of marker words for uncertainty and negation using distributional semantics
Alyaa Alfalahi | Maria Skeppstedt | Rickard Ahlbom | Roza Baskalayci | Aron Henriksson | Lars Asker | Carita Paradis | Andreas Kerren
Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis
Representing Clinical Notes for Adverse Drug Event Detection
Aron Henriksson
Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis
2014
Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi)
Sumithra Velupillai | Martin Duneld | Maria Kvist | Hercules Dalianis | Maria Skeppstedt | Aron Henriksson
EACL - Expansion of Abbreviations in CLinical text
Lisa Tengstrand | Beáta Megyesi | Aron Henriksson | Martin Duneld | Maria Kvist
Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR)
2013
Corpus-Driven Terminology Development: Populating Swedish SNOMED CT with Synonyms Extracted from Electronic Health Records
Aron Henriksson | Maria Skeppstedt | Maria Kvist | Martin Duneld | Mike Conway
Proceedings of the 2013 Workshop on Biomedical Natural Language Processing
2011
Exploiting Structured Data, Negation Detection and SNOMED CT Terms in a Random Indexing Approach to Clinical Coding
Aron Henriksson | Martin Hassel
Proceedings of the Second Workshop on Biomedical Natural Language Processing
Something Old, Something New – Applying a Pre-trained Parsing Model to Clinical Swedish
Martin Duneld | Aron Henriksson | Sumithra Velupillai
Proceedings of the 18th Nordic Conference of Computational Linguistics (NODALIDA 2011)
2010
Levels of certainty in knowledge-intensive corpora: an initial annotation study
Aron Henriksson | Sumithra Velupillai
Proceedings of the Workshop on Negation and Speculation in Natural Language Processing