Mun Yong Yi
2025
Leveraging LLM-Generated Schema Descriptions for Unanswerable Question Detection in Clinical Data
Donghee Han | Seungjae Lim | Daeyoung Roh | Sangryul Kim | Sehyun Kim | Mun Yong Yi
Proceedings of the 31st International Conference on Computational Linguistics
Recent advancements in large language models (LLMs) have boosted research on generating SQL queries from domain-specific questions, particularly in the medical domain. A key challenge is detecting and filtering unanswerable questions. Existing methods often rely on model uncertainty, but these approaches require extra resources and lack interpretability. We propose a lightweight model that predicts relevant database schemas to detect unanswerable questions, enhancing interpretability and addressing the data imbalance inherent in binary classification tasks. Furthermore, we found that LLM-generated schema descriptions can significantly enhance prediction accuracy. Our method provides a resource-efficient solution for unanswerable question detection in domain-specific question answering systems.
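A minimal sketch of the schema-relevance idea the abstract describes, assuming a sentence-transformers encoder: a question is compared against (LLM-generated) schema descriptions, and the question is flagged as unanswerable when no schema element is sufficiently relevant. The model name, schema descriptions, and threshold below are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch, not the paper's exact model: rank LLM-generated schema
# descriptions by similarity to the question and flag the question as
# unanswerable when even the best match is weakly related.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed lightweight encoder

# Hypothetical LLM-generated descriptions of clinical database tables.
schema_descriptions = {
    "patients": "Demographics of admitted patients: age, sex, admission date.",
    "labevents": "Laboratory test results with item id, value, and timestamp.",
    "prescriptions": "Drugs prescribed during a stay, with dose and route.",
}

def detect_unanswerable(question: str, threshold: float = 0.35):
    """Return (is_unanswerable, ranked schemas) for a natural-language question."""
    names = list(schema_descriptions)
    q_emb = encoder.encode(question, convert_to_tensor=True)
    d_emb = encoder.encode([schema_descriptions[n] for n in names], convert_to_tensor=True)
    sims = util.cos_sim(q_emb, d_emb)[0]
    ranked = sorted(zip(names, sims.tolist()), key=lambda x: -x[1])
    # The ranked schema list doubles as an interpretable explanation of the decision.
    return ranked[0][1] < threshold, ranked

unanswerable, ranked = detect_unanswerable("What is the patient's favorite color?")
print(unanswerable, ranked)
```

Framing detection as schema prediction, rather than binary answerable/unanswerable classification, sidesteps the label imbalance the abstract mentions and yields the ranked schema list as an explanation.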
Rationale Behind Essay Scores: Enhancing S-LLM’s Multi-Trait Essay Scoring with Rationale Generated by LLMs
SeongYeub Chu | Jong Woo Kim | Bryan Wong | Mun Yong Yi
Findings of the Association for Computational Linguistics: NAACL 2025
Existing automated essay scoring (AES) methods have relied solely on essay text, without using explanatory rationales for the scores, thereby forgoing an opportunity to capture, in a fine-grained manner, the specific aspects evaluated by rubric indicators. This paper introduces Rationale-based Multiple Trait Scoring (RMTS), a novel approach for multi-trait essay scoring that integrates prompt-engineering-based large language models (LLMs) with a fine-tuning-based essay scoring model built on a smaller large language model (S-LLM). RMTS uses an LLM-based trait-wise rationale generation system in which a separate LLM agent generates trait-specific rationales based on rubric guidelines, which the scoring model then uses to accurately predict multi-trait scores. Extensive experiments on benchmark datasets, including ASAP, ASAP++, and Feedback Prize, show that RMTS significantly outperforms state-of-the-art models and vanilla S-LLMs in trait-specific scoring. By assisting quantitative assessment with fine-grained qualitative rationales, RMTS enhances trait-wise reliability, providing partial explanations for the essays. The code is available at https://github.com/BBeeChu/RMTS.git.
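A minimal sketch of the two-stage flow the abstract describes: one LLM agent produces a rationale per trait from that trait's rubric, and a scoring model then consumes essay plus rationale. The trait names, prompt wording, and `call_llm` helper are hypothetical stand-ins, not the released implementation (see the linked repository for that).

```python
# Illustrative two-stage sketch of rationale-assisted multi-trait scoring.
# Stage 1: an LLM generates a rationale per trait from the rubric.
# Stage 2: a scoring model predicts each trait score from essay + rationale.

TRAITS = ["content", "organization", "word_choice", "conventions"]  # assumed traits

RATIONALE_PROMPT = (
    "You are an essay-scoring assistant. Using the rubric below, explain in "
    "2-3 sentences how the essay performs on the trait '{trait}'.\n"
    "Rubric: {rubric}\nEssay: {essay}\nRationale:"
)

def call_llm(prompt: str) -> str:
    """Stand-in for any chat-completion client (OpenAI, local model, etc.)."""
    raise NotImplementedError("wire up your LLM client here")

def generate_rationales(essay: str, rubrics: dict[str, str]) -> dict[str, str]:
    # Stage 1: one rationale per trait, conditioned on that trait's rubric.
    return {
        trait: call_llm(RATIONALE_PROMPT.format(trait=trait, rubric=rubrics[trait], essay=essay))
        for trait in TRAITS
    }

def score_with_rationales(essay: str, rationales: dict[str, str], scorer) -> dict[str, float]:
    # Stage 2: a fine-tuned S-LLM (`scorer`) maps essay + rationale to a score.
    return {
        trait: scorer(f"Trait: {trait}\nRationale: {rationales[trait]}\nEssay: {essay}")
        for trait in TRAITS
    }
```

The design choice this illustrates is the division of labor: the prompt-driven LLM supplies qualitative, rubric-grounded evidence, while the fine-tuned S-LLM remains responsible for the quantitative prediction.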
2014
Unsupervised Verb Inference from Nouns Crossing Root Boundary
Soon Gill Hong | Sin-Hee Cho | Mun Yong Yi
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers