Martin Boeker
2026
Developing the German Medical Text Corpus (GeMTeX): Legal Compliance and Semantic Enrichment
Justin Hofenbitzer | Christina Lohr | Andrea Riedel | Rebekka Kiser | Aliaksandra Shutsko | Abanoub Abdelmalak | Peter Klügl | Jutta Romberg | Sarah Riepenhausen | Miriam Schechner | Jakob Faller | Frank Meineke | Luise Modersohn | Markus Löffler | Juliane Fluck | Udo Hahn | Stefan Schulz | Martin Boeker
Proceedings of the Fifteenth Language Resources and Evaluation Conference
GeMTeX is a large-scale German Medical Text Corpus project whose goal is to publish a national clinical reference corpus. The resource is currently under construction and comprises, as of February 2026, more than 15k clinical documents (20M tokens) from six German university hospitals. When building GeMTeX, care was taken to comply with European regulatory requirements. In phase I, patients were asked to allow reuse of their clinical documents on the legal basis of informed consent. In phase II, consented documents from six major clinical sites in Germany underwent a thorough de-identification process. In phase III, we are currently enriching this unlocked dataset with semantic information from the clinical domain. This annotation process is guided by SNOMED CT, which makes it possible to ground expressions in clinical documents directly in a globally shared medical documentation and ontology standard. The resource is under active development and is accessible upon request under controlled access conditions. We invite interested researchers to visit https://kiinformatik.mri.tum.de/en/gemtex or to reach out via gemtex.mi@mh.tum.de.
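To illustrate the kind of masking performed in a de-identification step like phase II, here is a minimal sketch in Python. It is a toy regex pass over an invented German note, not the actual GeMTeX pipeline; all patterns, placeholders, and the example text are assumptions for illustration only.

```python
import re

# Toy de-identification sketch (NOT the GeMTeX pipeline): mask a few
# obvious PHI-like patterns in a German clinical note with placeholders.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{4}\b"),        # e.g. 03.02.2026
    "PHONE": re.compile(r"\b0\d{2,4}[ /-]?\d{4,8}\b"),         # e.g. 0761 1234567
    "NAME": re.compile(r"\b(?:Herr|Frau)\s+[A-ZÄÖÜ][a-zäöüß]+\b"),
}

def deidentify(text: str) -> str:
    """Replace each matched pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

note = "Frau Meier wurde am 03.02.2026 aufgenommen."
print(deidentify(note))  # <NAME> wurde am <DATE> aufgenommen.
```

A production pipeline would of course rely on trained NER models and manual review rather than regexes alone; the sketch only shows the masking idea.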
2025
GerMedIQ: A Resource for Simulated and Synthesized Anamnesis Interview Responses in German
Justin Hofenbitzer | Sebastian Schöning | Sebastian Belle | Jacqueline Lammert | Luise Modersohn | Martin Boeker | Diego Frassinelli
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Due to strict privacy regulations, text corpora in non-English clinical contexts are scarce. Consequently, synthetic data generation using Large Language Models (LLMs) emerges as a promising strategy to address this data gap. To evaluate the ability of LLMs to generate synthetic data, we applied them to our novel German Medical Interview Questions Corpus (GerMedIQ), which consists of 4,524 unique, simulated question-response pairs in German. We augmented our corpus by prompting 18 different LLMs to generate responses to the same questions. Structural and semantic evaluations of the generated responses revealed that large language models produced responses comparable to those provided by humans. Additionally, an LLM-as-a-judge study, combined with a human baseline experiment assessing response acceptability, demonstrated that human raters preferred the responses generated by Mistral (124B) over those produced by humans. Nonetheless, our findings indicate that using LLMs for data augmentation in non-English clinical contexts requires caution.
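As a rough illustration of a structural comparison between human and LLM-generated responses, here is a minimal sketch. The example responses and the token-based measures are invented for illustration; this is not the GerMedIQ data or the paper's evaluation code.

```python
# Toy structural comparison of human vs. synthetic interview responses
# (invented examples; not the GerMedIQ corpus or evaluation code).
from statistics import mean

def avg_tokens(responses):
    """Average whitespace-token length of a list of responses."""
    return mean(len(r.split()) for r in responses)

def jaccard(a, b):
    """Crude lexical-overlap proxy between two responses."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

human = "Ich habe seit drei Tagen Kopfschmerzen."
synthetic = "Seit etwa drei Tagen habe ich starke Kopfschmerzen."

print(avg_tokens([human]), avg_tokens([synthetic]))
print(round(jaccard(human, synthetic), 2))
```

Real structural and semantic evaluations would use richer measures (e.g. embedding similarity), but the sketch shows the shape of such a comparison.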
2024
GottBERT: a pure German Language Model
Raphael Scheible | Johann Frei | Fabian Thomczyk | Henry He | Patric Tippmann | Jochen Knaus | Victor Jaravine | Frank Kramer | Martin Boeker
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Pre-trained language models have significantly advanced natural language processing (NLP), especially with the introduction of BERT and its optimized version, RoBERTa. While initial research focused on English, single-language models can be advantageous compared to multilingual ones in terms of pre-training effort, overall resource efficiency, or downstream task performance. Despite the growing popularity of prompt-based LLMs, more compute-efficient BERT-like models remain highly relevant. In this work, we present the first German single-language RoBERTa model, GottBERT, pre-trained exclusively on the German portion of the OSCAR dataset. Additionally, we investigated the impact of filtering the OSCAR corpus. GottBERT was pre-trained using fairseq and standard hyperparameters. We evaluated its performance on two Named Entity Recognition (NER) tasks (CoNLL 2003 and GermEval 2014) and three text classification tasks (GermEval 2018 fine and coarse, and 10kGNAD) against existing German BERT models and two multilingual models. Performance was measured using the F1 score and accuracy. The GottBERT base and large models showed competitive performance, with GottBERT leading among the base models in 4 of 6 tasks. Contrary to our expectations, the applied filtering did not significantly affect the results. To support the German NLP research community, we are releasing the GottBERT models under the MIT license.
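The F1 comparison reported for the NER tasks can be illustrated with a minimal entity-level scoring sketch. This is generic CoNLL-style micro-F1 on toy spans, not the paper's actual evaluation script, and the example annotations are hypothetical.

```python
# Generic entity-level micro-F1 for NER (toy spans; not the paper's exact
# evaluation script, which scored CoNLL 2003 and GermEval 2014 predictions).
def micro_f1(gold, pred):
    tp = len(gold & pred)                        # exact (type, span) matches
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical annotations: (sentence id, entity type, token span).
gold = {(0, "PER", (0, 2)), (0, "LOC", (5, 6))}
pred = {(0, "PER", (0, 2)), (0, "ORG", (5, 6))}
print(micro_f1(gold, pred))  # 0.5
```

An entity counts as correct only if both its type and its span match exactly, which is why the mistyped second entity costs both precision and recall here.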