Hyeondey Kim
2022
Korean Language Modeling via Syntactic Guide
Hyeondey Kim | Seonhoon Kim | Inho Kang | Nojun Kwak | Pascale Fung
Proceedings of the Thirteenth Language Resources and Evaluation Conference
While pre-trained language models play a vital role in modern language processing tasks, not every language can benefit from them. Most existing research on pre-trained language models focuses primarily on widely used languages such as English, Chinese, and other Indo-European languages. Moreover, such schemes usually require extensive computational resources alongside a large amount of data, which is infeasible for less widely used languages. We aim to address this research niche by building a language model that understands the linguistic phenomena of the target language and can be trained with limited resources. In this paper, we discuss Korean language modeling, specifically methods for language representation and pre-training. With our Korean-specific language representation, we are able to build more powerful language models for Korean understanding, even with fewer resources. The paper proposes chunk-wise reconstruction of the Korean language based on a widely used transformer architecture and bidirectional language representation. We also introduce morphological features such as Part-of-Speech (PoS) into language understanding by leveraging such information during pre-training. Our experimental results show that the proposed methods improve model performance on the investigated Korean language understanding tasks.
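For illustration only, the sketch below (PyTorch) shows one generic way morphological features such as PoS tags could be injected into a transformer's input representation during pre-training, in the spirit of the abstract. The class and parameter names (PosAwareEmbedding, n_pos_tags, and so on) are placeholders for this sketch, not the paper's actual implementation.

    import torch
    import torch.nn as nn

    class PosAwareEmbedding(nn.Module):
        """Sums token, PoS-tag, and position embeddings before the transformer encoder."""
        def __init__(self, vocab_size, n_pos_tags, hidden_size, max_len=512):
            super().__init__()
            self.token_emb = nn.Embedding(vocab_size, hidden_size)
            self.pos_tag_emb = nn.Embedding(n_pos_tags, hidden_size)   # morphological PoS feature
            self.position_emb = nn.Embedding(max_len, hidden_size)
            self.norm = nn.LayerNorm(hidden_size)

        def forward(self, token_ids, pos_tag_ids):
            # token_ids, pos_tag_ids: (batch, seq_len) integer tensors
            positions = torch.arange(token_ids.size(1), device=token_ids.device)
            x = (self.token_emb(token_ids)
                 + self.pos_tag_emb(pos_tag_ids)
                 + self.position_emb(positions))
            return self.norm(x)

A PoS-tagged corpus would supply pos_tag_ids alongside token_ids, so the encoder sees the morphological signal from the first layer onward.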
2019
Generalizing Question Answering System with Pre-trained Language Model Fine-tuning
Dan Su | Yan Xu | Genta Indra Winata | Peng Xu | Hyeondey Kim | Zihan Liu | Pascale Fung
Proceedings of the 2nd Workshop on Machine Reading for Question Answering
With a large number of datasets being released and new techniques being proposed, question answering (QA) systems have witnessed great breakthroughs in reading comprehension (RC) tasks. However, most existing methods focus on improving in-domain performance, leaving open the research question of how these models and techniques can generalize to out-of-domain and unseen RC tasks. To enhance the generalization ability, we propose a multi-task learning framework that learns a shared representation across different tasks. Our model is built on top of a large pre-trained language model, such as XLNet, and then fine-tuned on multiple RC datasets. Experimental results show the effectiveness of our methods, with an average Exact Match score of 56.59 and an average F1 score of 68.98, which significantly improves over the BERT-Large baseline by 8.39 and 7.22 points, respectively.
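As a minimal sketch of the general idea (not the authors' released code), the snippet below shows multi-task fine-tuning where several RC datasets train one shared pre-trained encoder and a common span-prediction head; encoder, loaders, and the batch keys (input_ids, start_pos, end_pos) are assumed placeholders.

    import random
    import torch
    import torch.nn as nn

    class SpanQAHead(nn.Module):
        """Predicts answer-span start/end logits from encoder hidden states."""
        def __init__(self, hidden_size):
            super().__init__()
            self.span_logits = nn.Linear(hidden_size, 2)

        def forward(self, hidden_states):
            start_logits, end_logits = self.span_logits(hidden_states).split(1, dim=-1)
            return start_logits.squeeze(-1), end_logits.squeeze(-1)

    def multitask_step(encoder, head, loaders, optimizer):
        # Sample one RC dataset per step so every task updates the same shared encoder.
        batch = next(iter(random.choice(loaders)))
        hidden = encoder(batch["input_ids"])           # (batch, seq_len, hidden_size)
        start_logits, end_logits = head(hidden)
        loss_fn = nn.CrossEntropyLoss()
        loss = loss_fn(start_logits, batch["start_pos"]) + loss_fn(end_logits, batch["end_pos"])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Because all datasets share the encoder and head, the representation learned this way is pushed toward features that transfer across domains rather than overfitting to any single RC dataset.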
Co-authors
- Pascale Fung 2
- Dan Su 1
- Yan Xu 1
- Genta Indra Winata 1
- Peng Xu 1