Juho Lee


2021

Learning to Perturb Word Embeddings for Out-of-distribution QA
Seanie Lee | Minki Kang | Juho Lee | Sung Ju Hwang
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

QA models based on pretrained language models have achieved remarkable performance on various benchmark datasets. However, they do not generalize well to unseen data that falls outside the training distribution, due to distributional shift. Data augmentation (DA) techniques that drop or replace words have been shown to be effective in regularizing models against overfitting to the training data. Yet they may adversely affect QA, since they incur semantic changes that can lead to wrong answers. To tackle this problem, we propose a simple yet effective DA method based on a stochastic noise generator, which learns to perturb the word embeddings of the input question and context without changing their semantics. We validate QA models trained with our word-embedding perturbation on a single source dataset against five different target domains. The results show that our method significantly outperforms the baseline DA methods. Notably, the model trained with ours outperforms a model trained with more than 240K artificially generated QA pairs.
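As a rough illustration of the idea in this abstract, the sketch below perturbs word embeddings with learnable Gaussian noise during training. It is a minimal, hypothetical example: the paper's actual stochastic noise generator is not reproduced here, and the module, parameter names, and toy dimensions are all assumptions.

# Minimal sketch, assuming PyTorch: learnable Gaussian perturbation
# of word embeddings as a data-augmentation layer. Illustrative only;
# not the paper's generator.
import torch
import torch.nn as nn

class NoisyEmbedding(nn.Module):
    def __init__(self, vocab_size: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        # Per-dimension log standard deviation, learned jointly with the model.
        self.log_sigma = nn.Parameter(torch.zeros(dim))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        e = self.embed(token_ids)
        if self.training:
            # Reparameterized Gaussian noise keeps the perturbation
            # differentiable, so the noise scale can be trained to stay
            # small enough to preserve the input's semantics.
            e = e + torch.randn_like(e) * self.log_sigma.exp()
        return e

emb = NoisyEmbedding(vocab_size=30522, dim=768)
ids = torch.randint(0, 30522, (2, 16))  # a toy batch of token ids
perturbed = emb(ids)                    # noisy embeddings during training

Because the noise is applied in embedding space rather than by dropping or replacing tokens, the surface form of the question and context is untouched, which is the property the abstract contrasts with word-level DA.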

2019

Learning with Limited Data for Multilingual Reading Comprehension
Kyungjae Lee | Sunghyun Park | Hojae Han | Jinyoung Yeo | Seung-won Hwang | Juho Lee
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

This paper studies the problem of supporting question answering in a new language with limited training resources. In the extreme scenario where no such resource exists, one can (1) transfer labels from another language, and (2) generate labels from unlabeled data, using a translator and an automatic labeling function, respectively. However, these approaches inevitably introduce noise into the training data, due to translation or generation errors, which requires a judicious use of data with varying confidence. To address this challenge, we propose a weakly supervised framework that quantifies the noise in automatically generated labels, to de-emphasize or fix noisy data during training. On the reading comprehension task, we demonstrate the effectiveness of our model on low-resource languages with varying similarity to English, namely Korean and French.
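One common way to act on per-example confidence of the kind this abstract describes is to weight each example's loss by its estimated label quality. The sketch below shows that general pattern only; the paper's actual noise-quantification model is not reproduced, and every name and value here is an illustrative assumption.

# Minimal sketch, assuming PyTorch: down-weighting noisy,
# automatically labeled examples by a confidence score.
# Illustrative of confidence-weighted training in general,
# not of the paper's specific framework.
import torch
import torch.nn.functional as F

def weighted_loss(logits: torch.Tensor,
                  labels: torch.Tensor,
                  confidence: torch.Tensor) -> torch.Tensor:
    # Per-example cross-entropy, kept unreduced so it can be reweighted.
    per_example = F.cross_entropy(logits, labels, reduction="none")
    # De-emphasize examples whose translated or generated labels look noisy.
    return (confidence * per_example).sum() / confidence.sum()

logits = torch.randn(4, 10)                      # toy model outputs
labels = torch.randint(0, 10, (4,))              # noisy auto-generated labels
confidence = torch.tensor([1.0, 0.3, 0.8, 0.5])  # estimated label quality
loss = weighted_loss(logits, labels, confidence)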

2004

Korean-Chinese-Japanese Multilingual Wordnet with Shared Semantic Hierarchy
Key-Sun Choi | Hee-Sook Bae | Wonseok Kang | Juho Lee | Eunhe Kim | Hekyeong Kim | Donghee Kim | Youngbin Song | Hyosik Shin
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

2001

A Korean Noun Semantic Hierarchy (Wordnet) Construction
Juho Lee | Koaunghi Un | Hee-Sook Bae | Key-Sun Choi
Proceedings of the 16th Pacific Asia Conference on Language, Information and Computation