Minseok Kim
2024
Korean Bio-Medical Corpus (KBMC) for Medical Named Entity Recognition
Sungjoo Byun | Jiseung Hong | Sumin Park | Dongjun Jang | Jean Seo | Minseok Kim | Chaeyoung Oh | Hyopil Shin
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Named Entity Recognition (NER) plays a pivotal role in medical Natural Language Processing (NLP). Yet, no open-source medical NER dataset has been available specifically for the Korean language. To address this, we utilized ChatGPT to assist in constructing the KBMC (Korean Bio-Medical Corpus), which we are now releasing to the public. With the KBMC dataset, we observed a 20% improvement in medical NER performance compared to models trained on general Korean NER datasets. This research underscores the benefits of using specialized tools such as ChatGPT and domain-specific datasets to enhance language processing in specialized fields such as healthcare.
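As a concrete illustration of the training setup the abstract describes, the sketch below fine-tunes a Korean encoder for token-level NER on a BIO-tagged corpus with Hugging Face Transformers. The checkpoint name (klue/bert-base), the medical label set, and the column names (tokens, ner_tags) are illustrative assumptions, not details taken from KBMC itself.

```python
# Minimal sketch (assumptions noted above): fine-tune a Korean encoder
# for token-level medical NER on a BIO-tagged corpus.
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Hypothetical label set; KBMC's actual tag inventory may differ.
labels = ["O", "B-Disease", "I-Disease", "B-Body", "I-Body",
          "B-Treatment", "I-Treatment"]
label2id = {l: i for i, l in enumerate(labels)}
id2label = {i: l for l, i in label2id.items()}

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")  # assumed backbone
model = AutoModelForTokenClassification.from_pretrained(
    "klue/bert-base", num_labels=len(labels),
    id2label=id2label, label2id=label2id)

def encode(example):
    """Align word-level BIO tags with subword tokens; only the first
    subword of each word keeps its label, the rest are ignored (-100)."""
    enc = tokenizer(example["tokens"], is_split_into_words=True,
                    truncation=True, max_length=256)
    aligned, prev = [], None
    for wid in enc.word_ids():
        if wid is None:
            aligned.append(-100)                 # [CLS]/[SEP]/padding
        elif wid != prev:
            aligned.append(label2id[example["ner_tags"][wid]])
        else:
            aligned.append(-100)                 # continuation subword
        prev = wid
    enc["labels"] = aligned
    return enc
```

From here, a DataCollatorForTokenClassification and the standard Trainer would handle batching and the fine-tuning loop.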
Aligning Large Language Models via Fine-grained Supervision
Dehong Xu | Liang Qiu | Minseok Kim | Faisal Ladhak | Jaeyoung Do
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Pre-trained large language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations. Current approaches focus on using reinforcement learning with human feedback (RLHF) to improve model alignment, which works by transforming coarse human preferences over LLM outputs into a feedback signal that guides the model learning process. However, because this approach operates on sequence-level feedback, it lacks the precision to identify the exact parts of the output that affect user preferences. To address this gap, we propose a method to enhance LLM alignment through fine-grained, token-level supervision. Specifically, we ask annotators to minimally edit the less preferred responses in a standard reward-modeling dataset to make them more favorable, ensuring changes are made only where necessary while retaining most of the original content. The refined dataset is used to train a token-level reward model, which is then used to train our fine-grained Proximal Policy Optimization (PPO) model. Our experimental results demonstrate that this approach can improve LLM performance by up to 5.1% in terms of win rate against the reference model, compared with the traditional PPO model.
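To make the token-level supervision idea more tangible, here is a minimal sketch of one way such data could be turned into a per-token reward model: a diff between the rejected response and its minimally edited version marks which tokens were changed, and a small head over a language-model encoder predicts a reward per token. The diff heuristic, the gpt2 backbone, and the binary kept/edited labels are assumptions for illustration, not the paper's exact recipe.

```python
# Minimal sketch (assumptions noted above): turn a (rejected, minimally
# edited) response pair into per-token labels via a diff and score tokens
# with a small reward head over a language-model encoder.
import difflib

import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")    # assumed backbone
encoder = AutoModel.from_pretrained("gpt2")

def token_level_labels(rejected: str, edited: str):
    """Label each token of the rejected response: 1 if the annotator kept
    it, 0 if it was edited away (a heuristic stand-in for human labels)."""
    rej = tokenizer.tokenize(rejected)
    edt = tokenizer.tokenize(edited)
    labels = [1] * len(rej)
    for op, i1, i2, _, _ in difflib.SequenceMatcher(a=rej, b=edt).get_opcodes():
        if op in ("replace", "delete"):
            labels[i1:i2] = [0] * (i2 - i1)          # changed tokens -> low reward
    return rej, labels

class TokenRewardModel(nn.Module):
    """Per-token scalar reward: encoder hidden states -> linear head."""
    def __init__(self, encoder, hidden_size=768):    # 768 matches gpt2
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        return self.head(h).squeeze(-1)              # one reward per token
```

Training the head with a binary cross-entropy loss over these kept/edited labels, then feeding its per-token scores to PPO in place of a single sequence-level reward, would be one way to realize the pipeline the abstract outlines.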