Jiwoo Lee
2025
DMIS Lab at ArchEHR-QA 2025: Evidence-Grounded Answer Generation for EHR-based QA via a Multi-Agent Framework
Hyeon Hwang | Hyeongsoon Hwang | Jongmyung Jung | Jaehoon Yun | Minju Song | Yein Park | Dain Kim | Taewhoo Lee | Jiwoong Sohn | Chanwoong Yoon | Sihyeon Park | Jiwoo Lee | Heechul Yang | Jaewoo Kang
Proceedings of the 24th Workshop on Biomedical Language Processing (Shared Tasks)
Learning from Negative Samples in Biomedical Generative Entity Linking
Chanhwi Kim | Hyunjae Kim | Sihyeon Park | Jiwoo Lee | Mujeen Sung | Jaewoo Kang
Findings of the Association for Computational Linguistics: ACL 2025
Generative models have become widely used in biomedical entity linking (BioEL) thanks to their strong performance and efficient memory usage. However, these models are usually trained only on positive samples (entities that match the input mention's identifier) and do not explicitly learn from hard negative samples, i.e., entities that look similar but have different meanings. To address this limitation, we introduce ANGEL (Learning from Negative Samples in Biomedical Generative Entity Linking), the first framework that trains generative BioEL models with negative samples. Specifically, a generative model is first trained to generate positive entity names from the knowledge base for given input mentions. Both correct and incorrect outputs are then gathered from the model's top-k predictions, and the model is updated to prioritize the correct predictions through preference optimization. Models fine-tuned with ANGEL outperform the previous best baselines by up to 1.4% in average top-1 accuracy across five benchmarks. When our framework is also incorporated into pre-training, the improvement grows to 1.7%, demonstrating its effectiveness in both the pre-training and fine-tuning stages. The code and model weights are available at https://github.com/dmis-lab/ANGEL.
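The preference-optimization step described in the abstract lends itself to a short sketch. The following is a minimal, hypothetical illustration (not ANGEL's released code), assuming a DPO-style objective: the model's top-k generated entity names are split into correct and incorrect candidates by their knowledge-base identifiers, and pairs of them feed a loss that widens the likelihood margin of the correct name over the incorrect one. All function names and the `resolve_id` helper are illustrative.

```python
# Hypothetical sketch of preference optimization over generated entity names.
# Top-k generations are split into positives (gold KB identifier) and hard
# negatives (similar names, different identifier); a DPO-style loss then
# pushes the model to prefer the positives. Not ANGEL's actual API.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO-style loss over (correct, incorrect) generation pairs.

    Each argument is a tensor of summed token log-probabilities for a
    generated entity name under the policy or the frozen reference model.
    """
    # Log-odds of the policy relative to the reference for each completion.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Encourage a large margin between chosen and rejected completions.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

def build_preference_pairs(mention, topk_predictions, gold_ids, resolve_id):
    """Split the model's top-k generated names into positives (names whose
    KB identifier is in the gold set) and hard negatives (lookalike names
    with a different identifier), then pair them for the loss above."""
    positives, negatives = [], []
    for name in topk_predictions:
        (positives if resolve_id(name) in gold_ids else negatives).append(name)
    return [(mention, p, n) for p in positives for n in negatives]
```

Drawing negatives from the model's own top-k predictions is what makes them hard negatives: they are exactly the lookalike names the generator already confuses.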
2024
KU-DMIS at EHRSQL 2024: Generating SQL query via question templatization in EHR
Hajung Kim | Chanhwi Kim | Hoonick Lee | Kyochul Jang | Jiwoo Lee | Kyungjae Lee | Gangwoo Kim | Jaewoo Kang
Proceedings of the 6th Clinical Natural Language Processing Workshop
Transforming natural language questions into SQL queries is crucial for precise data retrieval from electronic health record (EHR) databases. A significant challenge in this process is detecting and rejecting unanswerable questions that request information outside the database's scope or exceed the system's capabilities. In this paper, we introduce a novel text-to-SQL framework that standardizes the structure of questions into a templated format. Our framework begins by fine-tuning GPT-3.5-turbo, a powerful large language model (LLM), with detailed prompts that include the table schemas of the EHR database system. Our approach shows promising results on the EHRSQL-2024 benchmark dataset, part of the ClinicalNLP shared task. Although the fine-tuned GPT model achieved third place on the development set, it struggled with the more diverse questions in the test set. With our templatization framework, we improved the system's adaptability and achieved fourth place on the official leaderboard of the EHRSQL-2024 challenge.
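A rough sketch of how such a pipeline might look, assuming a masked-value templatization step and a schema-aware prompt. The schema, prompt wording, and `llm` callable below are placeholders for illustration, not the system's actual implementation.

```python
# Hypothetical text-to-SQL pipeline: a question is normalized into a template
# (literal values masked), the template plus a table schema is sent to a
# fine-tuned LLM, and questions the model flags as out of scope are rejected.
import re

SCHEMA_PROMPT = """You translate hospital questions into SQLite queries.
Tables: patients(subject_id, dob), admissions(hadm_id, subject_id, admittime)
If the question cannot be answered from these tables, output: NULL"""

def templatize(question: str) -> tuple[str, list[str]]:
    """Mask literal values (here, numbers) so that questions sharing the
    same structure map to the same template."""
    values = re.findall(r"\d+(?:\.\d+)?", question)
    template = re.sub(r"\d+(?:\.\d+)?", "<VAL>", question)
    return template, values

def question_to_sql(question: str, llm) -> str | None:
    """Return a SQL string, or None when the question is unanswerable."""
    template, values = templatize(question)
    prompt = f"{SCHEMA_PROMPT}\n\nQuestion template: {template}\nValues: {values}\nSQL:"
    sql = llm(prompt).strip()          # llm: any text-completion callable
    return None if sql == "NULL" else sql
```

Routing through a template rather than the raw question is the key design choice: it collapses surface variation so that unseen test questions fall back onto structures the model already handles.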
MultiPragEval: Multilingual Pragmatic Evaluation of Large Language Models
Dojun Park | Jiwoo Lee | Seohyun Park | Hyeyun Jeong | Youngeun Koo | Soonha Hwang | Seonwoo Park | Sungeun Lee
Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP
As the capabilities of Large Language Models (LLMs) expand, it becomes increasingly important to evaluate them beyond basic knowledge assessment, focusing on higher-level language understanding. This study introduces MultiPragEval, the first multilingual pragmatic evaluation of LLMs, covering English, German, Korean, and Chinese. Comprising 1200 question units categorized according to Grice's Cooperative Principle and its four conversational maxims, MultiPragEval enables an in-depth assessment of LLMs' contextual awareness and their ability to infer implied meanings. Our findings demonstrate that Claude3-Opus significantly outperforms other models in all tested languages, establishing the state of the art in the field. Among open-source models, Solar-10.7B and Qwen1.5-14B emerge as strong competitors. By analyzing pragmatic inference, we provide valuable insights into the capabilities essential for advanced language comprehension in AI systems.
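As a hedged illustration of how a benchmark of this shape can be scored, the loop below groups items by language and Gricean maxim and reports per-category accuracy. The item schema and exact-match scoring are assumptions for the sketch, not MultiPragEval's actual grading protocol.

```python
# Illustrative scoring loop for a MultiPragEval-style benchmark: items are
# grouped by (language, maxim) and per-category accuracy is computed.
# The dict-based item format here is assumed, not the benchmark's schema.
from collections import defaultdict

def evaluate(items, model):
    """items: iterable of dicts with keys 'language', 'maxim',
    'question', and 'answer'; model: callable from question to answer."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for item in items:
        key = (item["language"], item["maxim"])
        total[key] += 1
        if model(item["question"]).strip() == item["answer"]:
            correct[key] += 1
    # Accuracy per (language, maxim) category.
    return {key: correct[key] / total[key] for key in total}
```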
Co-authors
- Jaewoo Kang 3
- Hyeyun Jeong 2
- Chanhwi Kim 2
- Sungeun Lee 2
- Dojun Park 2
- Seohyun Park 2
- Sihyeon Park 2
- Soonha Hwang 1
- Hyeon Hwang 1
- Hyeongsoon Hwang 1
- Kyochul Jang 1
- Jongmyung Jung 1
- Hajung Kim 1
- Gangwoo Kim 1
- Dain Kim 1
- Hyunjae Kim 1
- Youngeun Koo 1
- Hoonick Lee 1
- Kyungjae Lee 1
- Taewhoo Lee 1
- Seonwoo Park 1
- Yein Park 1
- Jiwoong Sohn 1
- Minju Song 1
- Mujeen Sung 1
- Heechul Yang 1
- Chanwoong Yoon 1
- Jaehoon Yun 1