Felix Gers
Also published as: Felix Alexander Gers
2026
DeepICD-R1: Medical Reasoning through Hierarchical Rewards and Unsupervised Distillation
Tom Röhr | Thomas Maximilian Josef Steffek | Roman Teucher | Keno Bressem | Alexei Figueroa | Paul Grundmann | Peter Troeger | Felix Alexander Gers | Alexander Löser
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Large language models (LLMs) show strong reasoning abilities, but fully retraining them for the medical domain is often infeasible due to a lack of data or compute resources. We present DeepICD-R1, a framework for efficient medical reasoning fine-tuning that unites hierarchical rewards with distilled supervision. We reformulate ICD-10-CM prediction as a reinforcement learning problem and design a hierarchical outcome-based reward that reflects the ICD code structure across chapter, category, and full-code levels. In parallel, we publish a large-scale distilled dataset of over 90k reasoning traces derived from MIMIC-IV admission notes, integrating clinical validation and official coding guidelines. Fine-tuning smaller instruction-tuned LLMs on this data with GRPO reinforcement yields consistent gains in diagnostic accuracy and reasoning coherence. Extensive ablations confirm that hierarchical supervision and verifiable outcome rewards enable competitive, domain-specialized reasoning models without additional pretraining, providing a reproducible foundation for clinical NLP research. Keywords: Clinical NLP, Large Reasoning Model, GRPO, Supervised Fine-Tuning
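The hierarchical reward described in the abstract can be illustrated with a minimal sketch. ICD-10-CM codes are hierarchical: the first three characters form the category (e.g. "I21" in "I21.4"), and categories group into chapters. The sketch below gives partial credit at each level; the weights and the chapter approximation (leading letter only — real chapters span letter ranges) are assumptions for illustration, not the paper's exact reward design.

```python
# Illustrative sketch of a hierarchical outcome-based reward for
# ICD-10-CM code prediction. Weights and the chapter proxy are
# assumptions; the paper's exact formulation may differ.

def hierarchical_reward(pred: str, gold: str,
                        w_chapter: float = 0.2,
                        w_category: float = 0.3,
                        w_full: float = 0.5) -> float:
    """Partial credit at chapter, category, and full-code levels."""
    pred, gold = pred.strip().upper(), gold.strip().upper()
    reward = 0.0
    if pred[:1] == gold[:1]:   # chapter proxy: leading letter (simplified)
        reward += w_chapter
    if pred[:3] == gold[:3]:   # category: first three characters
        reward += w_category
    if pred == gold:           # exact full-code match
        reward += w_full
    return reward
```

A wrong-but-close prediction such as "I21.4" for gold "I21.0" still earns the chapter and category weights, so the reward signal is denser than exact-match accuracy.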
CliniBench: A Clinical Outcome Prediction Benchmark for Generative and Encoder-Based Language Models
Paul Grundmann | Jan Frick | Dennis Fast | Thomas Steffek | Felix Gers | Wolfgang Nejdl | Alexander Löser
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
With their growing capabilities, generative large language models (LLMs) are being increasingly investigated for complex medical tasks. However, their effectiveness in real-world clinical applications remains underexplored. To address this, we present CliniBench, the first benchmark that enables comparability of well-studied encoder-based classifiers and generative LLMs for discharge diagnosis prediction from admission notes in the MIMIC-IV dataset. Our extensive study compares 12 generative LLMs and 3 encoder-based classifiers and demonstrates that encoder-based classifiers consistently outperform generative models in diagnosis prediction. We assess several retrieval augmentation strategies for in-context learning from similar patients and find that they provide notable performance improvements for generative LLMs.
2024
DDxGym: Online Transformer Policies in a Knowledge Graph Based Natural Language Environment
Benjamin Winter | Alexei Gustavo Figueroa Rosero | Alexander Loeser | Felix Alexander Gers | Nancy Katerina Figueroa Rosero | Ralf Krestel
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Differential diagnosis (DDx) is vital for physicians and challenging due to the existence of numerous diseases and their complex symptoms. Model training for this task is generally hindered by limited data access due to privacy concerns. To address this, we present DDxGym, a specialized OpenAI Gym environment for clinical differential diagnosis. DDxGym formulates DDx as a natural-language-based reinforcement learning (RL) problem, where agents emulate medical professionals, selecting examinations and treatments for patients with randomly sampled diseases. This RL environment utilizes data labeled from online resources, evaluated by medical professionals for accuracy. Transformers, while effective for encoding text in DDxGym, are unstable in online RL. For that reason we propose a novel training method using an auxiliary masked language modeling objective for policy optimization, resulting in model stabilization and significant performance improvement over strong baselines. Following this approach, our agent effectively navigates large action spaces and identifies universally applicable actions. All data, environment details, and implementation, including experiment reproduction code, are made publicly available.
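The stabilization idea in the abstract — adding an auxiliary masked language modeling (MLM) objective to the policy optimization — can be sketched as a weighted sum of the two losses. The function names and the weighting coefficient `aux_weight` below are assumptions for illustration, not the paper's exact training code.

```python
# Minimal sketch of an auxiliary MLM objective combined with an RL
# policy loss, as described above. `aux_weight` is an assumed
# hyperparameter; the paper's exact objective may differ.
import math

def mlm_loss(masked_token_probs):
    """Cross-entropy over masked positions only: each entry is the
    model probability assigned to the true token at a masked slot."""
    return -sum(math.log(p) for p in masked_token_probs) / len(masked_token_probs)

def combined_loss(policy_loss, masked_token_probs, aux_weight=0.5):
    """Total objective = policy-gradient loss + weighted MLM term."""
    return policy_loss + aux_weight * mlm_loss(masked_token_probs)
```

The auxiliary term keeps the encoder anchored to language structure while the policy gradient, which is noisy in online RL, updates the same parameters.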
2022
Attention Networks for Augmenting Clinical Text with Support Sets for Diagnosis Prediction
Paul Grundmann | Tom Oberhauser | Felix Gers | Alexander Löser
Proceedings of the 29th International Conference on Computational Linguistics
Diagnosis prediction on admission notes is a core clinical task. However, these notes may describe the patient incompletely, and clinical language models may suffer from idiosyncratic language or an imbalanced vocabulary for describing diseases or symptoms. We tackle diagnosis prediction, which consists of predicting future patient diagnoses from clinical texts at the time of admission. To make this task more robust, we augment the clinical text with an additional signal from support sets of diagnosis codes, drawn either from prior admissions or from codes that emerge during the current admission as diagnostic results become available. We discuss novel attention network architectures and augmentation strategies to solve this problem. Our experiments reveal that support sets drastically improve performance in predicting less common diagnosis codes. Our approach clearly outperforms the previous state-of-the-art PubMedBERT baseline by up to 3 percentage points, and support sets improve performance for pregnancy- and gynecology-related diagnoses by up to 32.9 percentage points over the baseline.
This Patient Looks Like That Patient: Prototypical Networks for Interpretable Diagnosis Prediction from Clinical Text
Betty van Aken | Jens-Michalis Papaioannou | Marcel Naik | Georgios Eleftheriadis | Wolfgang Nejdl | Felix Gers | Alexander Loeser
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
The use of deep neural models for diagnosis prediction from clinical text has shown promising results. However, in clinical practice such models must not only be accurate, but provide doctors with interpretable and helpful results. We introduce ProtoPatient, a novel method based on prototypical networks and label-wise attention with both of these abilities. ProtoPatient makes predictions based on parts of the text that are similar to prototypical patients—providing justifications that doctors understand. We evaluate the model on two publicly available clinical datasets and show that it outperforms existing baselines. Quantitative and qualitative evaluations with medical doctors further demonstrate that the model provides valuable explanations for clinical decision support.
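The prototype-based prediction described in the abstract can be sketched in a few lines. A label is scored by the distance of a patient representation to that label's learned prototype vector, so each prediction can be justified by pointing at the most similar prototypical patient. The function names and the Euclidean distance choice below are illustrative assumptions, not ProtoPatient's exact architecture.

```python
# Illustrative sketch of prototype-based prediction: assign the label
# whose prototype vector lies closest to the patient representation.
# Distance metric and names are assumptions for illustration.

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_label(patient_vec, prototypes):
    """prototypes: dict mapping label -> prototype vector.
    Returns the label of the nearest prototype."""
    return min(prototypes, key=lambda lab: euclidean(patient_vec, prototypes[lab]))
```

Because the decision reduces to "this patient looks like that prototypical patient", the nearest prototype itself serves as the human-readable justification.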
KIMERA: Injecting Domain Knowledge into Vacant Transformer Heads
Benjamin Winter | Alexei Figueroa Rosero | Alexander Löser | Felix Alexander Gers | Amy Siu
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Training transformer language models requires vast amounts of text and computational resources. This drastically limits the usage of these models in niche domains for which they are not optimized, or where domain-specific training data is scarce. We focus here on the clinical domain because of its limited access to training data in common tasks, while structured ontological data is often readily available. Recent observations in model compression of transformer models show optimization potential in improving the representation capacity of attention heads. We propose KIMERA (Knowledge Injection via Mask Enforced Retraining of Attention) for detecting, retraining, and instilling attention heads with complementary structured domain knowledge. Our novel multi-task training scheme effectively identifies and targets individual attention heads that are least useful for a given downstream task and optimizes their representation with information from structured data. KIMERA generalizes well, thereby building the basis for efficient fine-tuning. KIMERA achieves significant performance boosts on seven datasets in the medical domain in Information Retrieval and Clinical Outcome Prediction settings. We apply KIMERA to BERT-base to evaluate the extent of the domain transfer and also improve on the already strong results of BioBERT in the clinical domain.
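The head-selection step described in the abstract can be sketched as follows: score each attention head by its usefulness for the downstream task (e.g. the loss increase when that head is masked out), then select the least useful heads as retraining targets. The scoring convention and function names are assumptions for illustration, not KIMERA's exact procedure.

```python
# Illustrative sketch of selecting the least useful attention heads
# for knowledge injection. `importance` maps (layer, head) -> score,
# where a lower score means the head contributes less to the task;
# the scoring function itself is assumed, not the paper's exact one.

def least_useful_heads(importance, k):
    """Return the k head identifiers with the smallest importance."""
    return sorted(importance, key=importance.get)[:k]
```

The selected heads can then be retrained on structured ontological data while the remaining heads keep serving the downstream task.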
Cross-Lingual Knowledge Transfer for Clinical Phenotyping
Jens-Michalis Papaioannou | Paul Grundmann | Betty van Aken | Athanasios Samaras | Ilias Kyparissidis | George Giannakoulas | Felix Gers | Alexander Loeser
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Clinical phenotyping enables the automatic extraction of clinical conditions from patient records, which can be beneficial to doctors and clinics worldwide. However, current state-of-the-art models are mostly applicable to clinical notes written in English. We therefore investigate cross-lingual knowledge transfer strategies to execute this task for clinics that do not use the English language and have a small amount of in-domain data available. Our results reveal two strategies that outperform the state-of-the-art: Translation-based methods in combination with domain-specific encoders and cross-lingual encoders plus adapters. We find that these strategies perform especially well for classifying rare phenotypes and we advise on which method to prefer in which situation. Our results show that using multilingual data overall improves clinical phenotyping models and can compensate for data sparseness.
2021
Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration
Betty van Aken | Jens-Michalis Papaioannou | Manuel Mayrdorfer | Klemens Budde | Felix Gers | Alexander Loeser
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Outcome prediction from clinical text can prevent doctors from overlooking possible risks and help hospitals to plan capacities. We simulate patients at admission time, when decision support can be especially valuable, and contribute a novel *admission to discharge* task with four common outcome prediction targets: Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction. The ideal system should infer outcomes based on symptoms, pre-conditions and risk factors of a patient. We evaluate the effectiveness of language models to handle this scenario and propose *clinical outcome pre-training* to integrate knowledge about patient outcomes from multiple public sources. We further present a simple method to incorporate ICD code hierarchy into the models. We show that our approach improves performance on the outcome tasks against several baselines. A detailed analysis reveals further strengths of the model, including transferability, but also weaknesses such as handling of vital values and inconsistencies in the underlying data.
2020
Is Language Modeling Enough? Evaluating Effective Embedding Combinations
Rudolf Schneider | Tom Oberhauser | Paul Grundmann | Felix Alexander Gers | Alexander Loeser | Steffen Staab
Proceedings of the Twelfth Language Resources and Evaluation Conference
Universal embeddings, such as BERT or ELMo, are useful for a broad set of natural language processing tasks like text classification or sentiment analysis. Moreover, specialized embeddings also exist for tasks like topic modeling or named entity disambiguation. We study whether we can complement these universal embeddings with specialized embeddings. We conduct an in-depth evaluation of nine well-known natural language understanding tasks with SentEval. Also, we extend SentEval with two additional tasks from the medical domain. We present PubMedSection, a novel topic classification dataset focused on the biomedical domain. Our comprehensive analysis covers 11 tasks and combinations of six embeddings. We report that combined embeddings outperform state-of-the-art universal embeddings without any embedding fine-tuning. We observe that adding topic model based embeddings helps for most tasks and that differing pre-training tasks encode complementary features. Moreover, we present new state-of-the-art results on the MPQA and SUBJ tasks in SentEval.
TrainX – Named Entity Linking with Active Sampling and Bi-Encoders
Tom Oberhauser | Tim Bischoff | Karl Brendel | Maluna Menke | Tobias Klatt | Amy Siu | Felix Alexander Gers | Alexander Löser
Proceedings of the 28th International Conference on Computational Linguistics: System Demonstrations
We demonstrate TrainX, a system for Named Entity Linking for medical experts. It combines state-of-the-art entity recognition and linking architectures, such as Flair and fine-tuned Bi-Encoders based on BERT, with an easy-to-use interface for healthcare professionals. We support medical experts in annotating training data by using active sampling strategies to forward informative samples to the annotator. We demonstrate that our model is capable of linking against large knowledge bases, such as UMLS (3.6 million entities), and supporting zero-shot cases, where the linker has never seen the entity before. Those zero-shot capabilities help to mitigate the problem of rare and expensive training data that is a common issue in the medical domain.
Co-authors
- Paul Grundmann 5
- Alexander Loeser 5
- Alexander Löser 5
- Tom Oberhauser 3
- Jens-Michalis Papaioannou 3
- Betty van Aken 3
- Wolfgang Nejdl 2
- Amy Siu 2
- Benjamin Winter 2
- Tim Bischoff 1
- Karl Brendel 1
- Keno Bressem 1
- Klemens Budde 1
- Georgios Eleftheriadis 1
- Dennis Fast 1
- Alexei Figueroa 1
- Alexei Gustavo Figueroa Rosero 1
- Nancy Katerina Figueroa Rosero 1
- Jan Frick 1
- George Giannakoulas 1
- Tobias Klatt 1
- Ralf Krestel 1
- Ilias Kyparissidis 1
- Manuel Mayrdorfer 1
- Maluna Menke 1
- Marcel Naik 1
- Alexei Figueroa Rosero 1
- Tom Röhr 1
- Athanasios Samaras 1
- Rudolf Schneider 1
- Steffen Staab 1
- Thomas Maximilian Josef Steffek 1
- Thomas Steffek 1
- Roman Teucher 1
- Peter Troeger 1