Yonghui Wu
2024
UF-HOBI at “Discharge Me!”: A Hybrid Solution for Discharge Summary Generation Through Prompt-based Tuning of GatorTronGPT Models
Mengxian Lyu | Cheng Peng | Daniel Paredes | Ziyi Chen | Aokun Chen | Jiang Bian | Yonghui Wu
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
Automatic generation of discharge summaries presents significant challenges due to the length of clinical documentation, the dispersed nature of patient information, and the diverse terminology used in healthcare. This paper presents a hybrid solution for generating discharge summary sections as part of our participation in the “Discharge Me!” Challenge at the BioNLP 2024 Shared Task. We developed a two-stage generation method combining extractive and abstractive techniques: we first apply named entity recognition (NER) to extract key clinical concepts, which are then used as input to a prompt-tuned GatorTronGPT model that generates coherent text for two key sections, “Brief Hospital Course” and “Discharge Instructions”. Our system ranked 5th in the challenge with an overall score of 0.284. The results demonstrate the effectiveness of our hybrid solution in improving the quality of automated discharge section generation.
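A minimal sketch of the two-stage extract-then-generate idea described in the abstract, assuming Hugging Face pipelines; the NER checkpoint, the base generator, and the prompt template are illustrative stand-ins, not the authors' released setup (the paper itself prompt-tunes GatorTronGPT):

```python
# Illustrative two-stage pipeline: extract clinical concepts with NER, then
# generate a discharge section conditioned on them. Model names and the prompt
# format are assumptions, not the authors' GatorTronGPT configuration.
from transformers import pipeline

# Stage 1 (extractive): clinical NER over the source documentation.
ner = pipeline("token-classification",
               model="d4data/biomedical-ner-all",   # stand-in clinical NER model
               aggregation_strategy="simple")

# Stage 2 (abstractive): a generative LM conditioned on the extracted concepts.
generator = pipeline("text-generation", model="gpt2")  # placeholder for GatorTronGPT

def generate_section(note: str, section: str) -> str:
    # Real clinical notes exceed model context; a production system would chunk.
    concepts = sorted({e["word"] for e in ner(note[:4000])})
    prompt = f"Key clinical concepts: {', '.join(concepts)}\n{section}:\n"
    out = generator(prompt, max_new_tokens=256, do_sample=False,
                    return_full_text=False)
    return out[0]["generated_text"]

print(generate_section(open("note.txt").read(), "Brief Hospital Course"))
```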
Comprehensive Study on German Language Models for Clinical and Biomedical Text Understanding
Ahmad Idrissi-Yaghir | Amin Dada | Henning Schäfer | Kamyar Arzideh | Giulia Baldini | Jan Trienes | Max Hasin | Jeanette Bewersdorff | Cynthia S. Schmidt | Marie Bauer | Kaleb E. Smith | Jiang Bian | Yonghui Wu | Jörg Schlötterer | Torsten Zesch | Peter A. Horn | Christin Seifert | Felix Nensa | Jens Kleesiek | Christoph M. Friedrich
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa. While these models demonstrate remarkable performance on general datasets, they can struggle in specialized domains such as medicine, where domain-specific terminology, abbreviations, and varying document structures are common. This paper explores strategies for adapting these models to domain-specific requirements, primarily through continuous pre-training on domain-specific data. We pre-trained several German medical language models on 2.4B tokens derived from translated public English medical data and 3B tokens of German clinical data. The resulting models were evaluated on various German downstream tasks, including named entity recognition (NER), multi-label classification, and extractive question answering. Our results suggest that models augmented by clinical and translation-based pre-training typically outperform general-domain models in medical contexts. We conclude that continuous pre-training can match or even exceed the performance of clinical models trained from scratch, and that pre-training on clinical data or leveraging translated texts has proven to be a reliable method for domain adaptation in medical NLP tasks.
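A minimal sketch of the continuous pre-training setup the abstract describes, assuming a Hugging Face masked-language-modeling loop; the base checkpoint, corpus file, and hyperparameters are illustrative assumptions, not the paper's configuration:

```python
# Continued pre-training: start from a general-domain German checkpoint and keep
# training it with masked language modeling on in-domain medical text.
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "bert-base-german-cased"            # general-domain starting point (assumed)
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Plain-text corpus of (translated) medical documents, one example per line.
corpus = load_dataset("text", data_files={"train": "german_medical.txt"})["train"]
corpus = corpus.map(lambda b: tok(b["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="medbert-de",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=corpus,
    # 15% random masking, the standard BERT-style MLM objective.
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()
```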
2023
On the Impact of Cross-Domain Data on German Language Models
Amin Dada | Aokun Chen | Cheng Peng | Kaleb Smith | Ahmad Idrissi-Yaghir | Constantin Seibold | Jianning Li | Lars Heiliger | Christoph Friedrich | Daniel Truhn | Jan Egger | Jiang Bian | Jens Kleesiek | Yonghui Wu
Findings of the Association for Computational Linguistics: EMNLP 2023
Traditionally, large language models have been trained either on general web crawls or on domain-specific data. However, recent successes of generative large language models have shed light on the benefits of cross-domain datasets. To examine the significance of prioritizing data diversity over quality, we present a German dataset comprising texts from five domains, along with a second dataset intended to contain only high-quality data. By training a series of models ranging from 122M to 750M parameters on both datasets, we conduct a comprehensive benchmark on multiple downstream tasks. Our findings demonstrate that models trained on the cross-domain dataset outperform those trained on quality data alone, yielding improvements of up to 4.45% over the previous state of the art.
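A minimal sketch of assembling such a cross-domain training mix with the Hugging Face datasets library; the domain names, file paths, and mixing weights are illustrative assumptions, not the paper's actual corpus composition:

```python
# Build a diverse pre-training stream by interleaving several domain corpora,
# so each batch sees mixed text rather than exhausting one domain at a time.
from datasets import load_dataset, interleave_datasets

domain_files = {                      # hypothetical domain corpora
    "web":     "web.txt",
    "news":    "news.txt",
    "legal":   "legal.txt",
    "medical": "medical.txt",
    "books":   "books.txt",
}
parts = [load_dataset("text", data_files=path, split="train")
         for path in domain_files.values()]

# Sampling probabilities control the domain balance (weights sum to 1.0).
mixed = interleave_datasets(parts,
                            probabilities=[0.3, 0.2, 0.2, 0.2, 0.1],
                            seed=42,
                            stopping_strategy="all_exhausted")
print(mixed[0])
```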
2016
UTHealth at SemEval-2016 Task 12: an End-to-End System for Temporal Information Extraction from Clinical Notes
Hee-Jin Lee | Hua Xu | Jingqi Wang | Yaoyun Zhang | Sungrim Moon | Jun Xu | Yonghui Wu
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)
2015
UTH-CCB: The Participation of the SemEval 2015 Challenge – Task 14
Jun Xu | Yaoyun Zhang | Jingqi Wang | Yonghui Wu | Min Jiang | Ergin Soysal | Hua Xu
Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)
Clinical Abbreviation Disambiguation Using Neural Word Embeddings
Yonghui Wu | Jun Xu | Yaoyun Zhang | Hua Xu
Proceedings of BioNLP 15
2014
Co-authors
- Hua Xu 4
- Yaoyun Zhang 4
- Jiang Bian 3
- Jingqi Wang 3
- Jun Xu 3
- Aokun Chen 2
- Amin Dada 2
- Ahmad Idrissi-Yaghir 2
- Min Jiang 2
- Jens Kleesiek 2
- Cheng Peng 2
- Kamyar Arzideh 1
- Giulia Baldini 1
- Marie Bauer 1
- Jeanette Bewersdorff 1
- Ziyi Chen 1
- Yukun Chen 1
- Jan Egger 1
- Christoph Friedrich 1
- Christoph M. Friedrich 1
- Max Hasin 1
- Lars Heiliger 1
- Peter A. Horn 1
- Hee-Jin Lee 1
- Jianning Li 1
- Mengxian Lyu 1
- Sungrim Moon 1
- Felix Nensa 1
- Daniel Paredes 1
- Jörg Schlötterer 1
- Cynthia S. Schmidt 1
- Henning Schäfer 1
- Constantin Seibold 1
- Christin Seifert 1
- Kaleb Smith 1
- Kaleb E. Smith 1
- Ergin Soysal 1
- Buzhou Tang 1
- Jan Trienes 1
- Daniel Truhn 1
- Torsten Zesch 1