Minju Song
2025
DMIS Lab at ArchEHR-QA 2025: Evidence-Grounded Answer Generation for EHR-based QA via a Multi-Agent Framework
Hyeon Hwang | Hyeongsoon Hwang | Jongmyung Jung | Jaehoon Yun | Minju Song | Yein Park | Dain Kim | Taewhoo Lee | Jiwoong Sohn | Chanwoong Yoon | Sihyeon Park | Jiwoo Lee | Heechul Yang | Jaewoo Kang
Proceedings of the 24th Workshop on Biomedical Language Processing (Shared Tasks)
Assessing LLM Reasoning Steps via Principal Knowledge Grounding
Hyeon Hwang | Yewon Cho | Chanwoong Yoon | Yein Park | Minju Song | Kyungjae Lee | Gangwoo Kim | Jaewoo Kang
Findings of the Association for Computational Linguistics: EMNLP 2025
Step-by-step reasoning has become a standard approach for large language models (LLMs) to tackle complex tasks. While this paradigm has proven effective, it raises a fundamental question: How can we verify that an LLM’s reasoning is accurately grounded in knowledge? To address this question, we introduce a novel evaluation suite that systematically assesses the knowledge grounding of intermediate reasoning. Our framework comprises three key components. (1) Principal Knowledge Collection, a large-scale repository of atomic knowledge essential for reasoning. Based on the collection, we propose (2) knowledge-grounded evaluation metrics designed to measure how well models recall and apply prerequisite knowledge in reasoning. These metrics are computed by our (3) evaluator LLM, a lightweight model optimized for cost-effective and reliable metric computation. Our evaluation suite demonstrates remarkable effectiveness in identifying missing or misapplied knowledge elements, providing crucial insights for uncovering fundamental reasoning deficiencies in LLMs. Beyond evaluation, we demonstrate how these metrics can be integrated into preference optimization, showcasing further applications of knowledge-grounded evaluation. Our evaluation suite is publicly available.
ETHIC: Evaluating Large Language Models on Long-Context Tasks with High Information Coverage
Taewhoo Lee | Chanwoong Yoon | Kyochul Jang | Donghyeon Lee | Minju Song | Hyunjae Kim | Jaewoo Kang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Recent advancements in large language models (LLMs) capable of processing extremely long texts highlight the need for a dedicated evaluation benchmark to assess their long-context capabilities. However, existing methods, like the needle-in-a-haystack test, do not effectively assess whether these models fully utilize contextual information, raising concerns about the reliability of current evaluation techniques. To thoroughly examine the effectiveness of existing benchmarks, we introduce a new metric called information coverage (IC), which quantifies the proportion of the input context necessary for answering queries. Our findings indicate that current benchmarks exhibit low IC; although the input context may be extensive, the actual usable context is often limited. To address this, we present ETHIC, a novel benchmark designed to assess LLMs’ ability to leverage the entire context. Our benchmark comprises 1,986 test instances spanning four long-context tasks with high IC scores in the domains of books, debates, medicine, and law. Our evaluations reveal significant performance drops in contemporary LLMs, highlighting a critical challenge in managing long contexts. Our benchmark is available at https://github.com/dmis-lab/ETHIC.