2025
DMIS Lab at ArchEHR-QA 2025: Evidence-Grounded Answer Generation for EHR-based QA via a Multi-Agent Framework
Hyeon Hwang | Hyeongsoon Hwang | Jongmyung Jung | Jaehoon Yun | Minju Song | Yein Park | Dain Kim | Taewhoo Lee | Jiwoong Sohn | Chanwoong Yoon | Sihyeon Park | Jiwoo Lee | Heechul Yang | Jaewoo Kang
BioNLP 2025 Shared Tasks
The increasing use of patient portals has amplified clinicians’ workloads, primarily because clinicians must address detailed patient inquiries about their health concerns. The ArchEHR-QA 2025 shared task aims to alleviate this burden by automatically generating accurate, evidence-grounded responses to patients’ questions based on their Electronic Health Records (EHRs). This paper presents a six-stage multi-agent framework, built on large language models (LLMs), developed to identify the clinical sentences essential for answering patient questions. Our approach begins with OpenAI’s o3 model generating focused medical context to guide downstream reasoning. In the subsequent stages, GPT-4.1-based agents assess the relevance of individual sentences, recruit domain experts, and consolidate their judgments to identify the information essential for constructing coherent, evidence-grounded responses. Our framework achieved an Overall Factuality score of 62.0 and an Overall Relevance score of 52.9 on the development set, and 58.6 and 48.8, respectively, on the test set.
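The abstract outlines the pipeline only at a high level, so the following is a minimal sketch of the general idea: one model drafts focused medical context, then per-sentence "expert" agents vote on whether each EHR sentence is essential, with votes consolidated by simple majority. The stage boundaries, prompts, persona list, and majority-vote rule are illustrative assumptions, not the authors' actual six-stage framework; only the model names (o3, GPT-4.1) come from the abstract.

# Hedged sketch of evidence selection; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def generate_context(question: str) -> str:
    """Stage 1 (assumed): draft focused medical context to guide later agents."""
    resp = client.chat.completions.create(
        model="o3",
        messages=[{"role": "user",
                   "content": f"Summarize the medical context needed to answer:\n{question}"}],
    )
    return resp.choices[0].message.content

def judge_sentence(question: str, context: str, sentence: str, persona: str) -> bool:
    """One GPT-4.1 'expert' agent labels a single EHR sentence as essential or not."""
    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system", "content": f"You are a {persona}."},
            {"role": "user",
             "content": (f"Context: {context}\nQuestion: {question}\n"
                         f"EHR sentence: {sentence}\n"
                         "Is this sentence essential to answer the question? Reply YES or NO.")},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def select_evidence(question: str, ehr_sentences: list[str]) -> list[str]:
    """Consolidate per-expert votes (simple majority, an assumption) into an evidence set."""
    context = generate_context(question)
    personas = ["cardiologist", "internist", "clinical pharmacist"]  # hypothetical experts
    essential = []
    for sent in ehr_sentences:
        votes = sum(judge_sentence(question, context, sent, p) for p in personas)
        if votes * 2 > len(personas):  # keep sentence if a majority says YES
            essential.append(sent)
    return essential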
2024
KU-DMIS at EHRSQL 2024: Generating SQL query via question templatization in EHR
Hajung Kim | Chanhwi Kim | Hoonick Lee | Kyochul Jang | Jiwoo Lee | Kyungjae Lee | Gangwoo Kim | Jaewoo Kang
Proceedings of the 6th Clinical Natural Language Processing Workshop
Transforming natural language questions into SQL queries is crucial for precise data retrieval from electronic health record (EHR) databases. A significant challenge in this process is detecting and rejecting unanswerable questions that request information outside the database’s scope or beyond the system’s capabilities. In this paper, we introduce a novel text-to-SQL framework that standardizes the structure of questions into a templated format. Our framework begins by fine-tuning GPT-3.5-turbo, a powerful large language model (LLM), with detailed prompts that include the table schemas of the EHR database system. Our approach shows promising results on the EHRSQL-2024 benchmark dataset, part of the ClinicalNLP shared task. Although the fine-tuned GPT model achieved third place on the development set, it struggled with the more diverse questions in the test set. With our framework, we improve the system’s adaptability and reach fourth place on the official leaderboard of the EHRSQL-2024 challenge.
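To make the templatization idea concrete, here is a toy sketch: a question is normalized into a (template, slots) pair, and questions whose slots fall outside the known schema are rejected as unanswerable. The templates, schema, and regex matching below are assumptions for illustration; the paper's system instead uses a fine-tuned GPT-3.5-turbo prompted with the real EHR table schemas.

# Toy templatization sketch (Python 3.10+); all names here are hypothetical.
import re

SCHEMA_COLUMNS = {"patient_id", "admission_time", "diagnosis", "lab_value"}

# Each template pairs a question pattern with a SQL skeleton.
TEMPLATES = [
    (re.compile(r"what is the (?P<col>\w+) of patient (?P<pid>\d+)", re.I),
     "SELECT {col} FROM ehr WHERE patient_id = {pid};"),
]

def to_sql(question: str) -> str | None:
    """Return a SQL query, or None if the question is unanswerable."""
    for pattern, skeleton in TEMPLATES:
        m = pattern.search(question)
        if m:
            col = m.group("col").lower()
            if col not in SCHEMA_COLUMNS:   # requested field not in the database
                return None                  # -> reject as unanswerable
            return skeleton.format(col=col, pid=m.group("pid"))
    return None  # no template matched -> treat as out of scope

print(to_sql("What is the diagnosis of patient 42?"))
# SELECT diagnosis FROM ehr WHERE patient_id = 42;
print(to_sql("What is the blood_type of patient 42?"))
# None  (blood_type is outside the schema)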
MultiPragEval: Multilingual Pragmatic Evaluation of Large Language Models
Dojun Park | Jiwoo Lee | Seohyun Park | Hyeyun Jeong | Youngeun Koo | Soonha Hwang | Seonwoo Park | Sungeun Lee
Proceedings of the 2nd GenBench Workshop on Generalisation (Benchmarking) in NLP
As the capabilities of Large Language Models (LLMs) expand, it becomes increasingly important to evaluate them beyond basic knowledge assessment, focusing on higher-level language understanding. This study introduces MultiPragEval, the first multilingual pragmatic evaluation suite for LLMs, covering English, German, Korean, and Chinese. Comprising 1,200 question units categorized according to Grice’s Cooperative Principle and its four conversational maxims, MultiPragEval enables an in-depth assessment of LLMs’ contextual awareness and their ability to infer implied meanings. Our findings demonstrate that Claude3-Opus significantly outperforms other models in all tested languages, establishing the state of the art in the field. Among open-source models, Solar-10.7B and Qwen1.5-14B emerge as strong competitors. By analyzing pragmatic inference, we provide valuable insights into the capabilities essential for advanced language comprehension in AI systems.
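Since the benchmark groups its 1,200 question units by language and Gricean maxim, a natural way to report results is a per-(language, maxim) accuracy grid. The record layout and field names below are hypothetical, not the dataset's actual schema; this sketch only illustrates the aggregation.

# Hedged sketch: accuracy per (language, maxim) cell over assumed result records.
from collections import defaultdict

MAXIMS = ("quantity", "quality", "relation", "manner")  # Grice's four maxims

def score_by_maxim(results: list[dict]) -> dict[tuple[str, str], float]:
    """results: [{"lang": ..., "maxim": ..., "correct": bool}, ...] (assumed layout)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in results:
        key = (r["lang"], r["maxim"])
        totals[key] += 1
        hits[key] += int(r["correct"])
    return {k: hits[k] / totals[k] for k in totals}

demo = [
    {"lang": "ko", "maxim": "relation", "correct": True},
    {"lang": "ko", "maxim": "relation", "correct": False},
    {"lang": "de", "maxim": "manner", "correct": True},
]
for (lang, maxim), acc in score_by_maxim(demo).items():
    print(f"{lang:>2} / {maxim:<8} accuracy = {acc:.2f}")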
Pragmatic Competence Evaluation of Large Language Models for the Korean Language
Dojun Park | Jiwoo Lee | Hyeyun Jeong | Seohyun Park | Sungeun Lee
Proceedings of the 38th Pacific Asia Conference on Language, Information and Computation