Eojin Jeon
2023
DIVE: Towards Descriptive and Diverse Visual Commonsense Generation
Jun-Hyung Park | Hyuntae Park | Youjin Kang | Eojin Jeon | SangKeun Lee
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Towards human-level visual understanding, visual commonsense generation has been introduced to generate commonsense inferences beyond images. However, current research on visual commonsense generation has overlooked an important human cognitive ability: generating descriptive and diverse inferences. In this work, we propose a novel visual commonsense generation framework, called DIVE, which aims to improve the descriptiveness and diversity of generated inferences. DIVE involves two methods, generic inference filtering and contrastive retrieval learning, which address the limitations of existing visual commonsense resources and training objectives. Experimental results verify that DIVE outperforms state-of-the-art models for visual commonsense generation in terms of both descriptiveness and diversity, while showing a superior quality in generating unique and novel inferences. Notably, DIVE achieves human-level descriptiveness and diversity on Visual Commonsense Graphs. Furthermore, human evaluations confirm that DIVE aligns closely with human judgments on descriptiveness and diversity.
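A minimal sketch of what generic inference filtering could look like, assuming that inferences repeated verbatim across many images are treated as generic; the data format, function name, and frequency threshold below are illustrative assumptions, not details from the paper.

```python
from collections import Counter

def filter_generic_inferences(samples, max_freq=50):
    """Drop inferences that recur across many different images.

    `samples` is assumed to be a list of (image_id, inference_text) pairs;
    an inference attached to more than `max_freq` images is treated as
    generic and removed. Both the format and the threshold are
    placeholder assumptions for illustration.
    """
    counts = Counter(text for _, text in samples)
    return [(img, text) for img, text in samples if counts[text] <= max_freq]


# Example: a boilerplate inference that appears everywhere is filtered out,
# while a descriptive, image-specific one is kept.
data = [("img1", "person wants to be happy")] * 60 + [
    ("img2", "person wants to fix the leaking pipe"),
]
print(len(filter_generic_inferences(data)))  # 1
```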
Improving Bias Mitigation through Bias Experts in Natural Language Understanding
Eojin Jeon | Mingyu Lee | Juhyeong Park | Yeachan Kim | Wing-Lam Mok | SangKeun Lee
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Biases in the dataset often enable the model to achieve high performance on in-distribution data, while poorly performing on out-of-distribution data. To mitigate the detrimental effect of the bias on the networks, previous works have proposed debiasing methods that down-weight the biased examples identified by an auxiliary model, which is trained with explicit bias labels. However, finding a type of bias in datasets is a costly process. Therefore, recent studies have attempted to make the auxiliary model biased without the guidance (or annotation) of bias labels, by constraining the model’s training environment or the capability of the model itself. Despite the promising debiasing results of recent works, the multi-class learning objective, which has been naively used to train the auxiliary model, may harm the bias mitigation effect due to its regularization effect and competitive nature across classes. As an alternative, we propose a new debiasing framework that introduces binary classifiers between the auxiliary model and the main model, coined bias experts. Specifically, each bias expert is trained on a binary classification task derived from the multi-class classification task via the One-vs-Rest approach. Experimental results demonstrate that our proposed strategy improves the bias identification ability of the auxiliary model. Consequently, our debiased model consistently outperforms the state-of-the-art on various challenge datasets.
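A rough sketch of the One-vs-Rest idea behind bias experts: one binary classifier per class instead of a single multi-class head on the auxiliary model. The encoder, hidden size, and loss wiring here are placeholder assumptions for illustration, not the paper's exact training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasExperts(nn.Module):
    """One binary classifier ("bias expert") per class, in the spirit of the
    One-vs-Rest formulation described in the abstract."""

    def __init__(self, encoder: nn.Module, hidden_size: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        # Each expert predicts "is the label class k or not?"
        self.experts = nn.ModuleList(
            nn.Linear(hidden_size, 1) for _ in range(num_classes)
        )

    def forward(self, inputs):
        h = self.encoder(inputs)                                   # (batch, hidden_size)
        return torch.cat([e(h) for e in self.experts], dim=-1)     # (batch, num_classes)


def one_vs_rest_loss(logits, labels, num_classes):
    # Each expert is trained on its own binary task derived from the
    # multi-class labels, avoiding the softmax competition across classes.
    targets = F.one_hot(labels, num_classes).float()
    return F.binary_cross_entropy_with_logits(logits, targets)
```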
2022
Break it Down into BTS: Basic, Tiniest Subword Units for Korean
Nayeon Kim | Jun-Hyung Park | Joon-Young Choi | Eojin Jeon | Youjin Kang | SangKeun Lee
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
We introduce Basic, Tiniest Subword (BTS) units for the Korean language, which are inspired by the invention principle of Hangeul, the Korean writing system. Instead of relying on 51 Korean consonant and vowel letters, we form the letters from BTS units by adding strokes or combining them. To examine the impact of BTS units on Korean language processing, we develop a novel BTS-based word embedding framework that is readily applicable to various models. Our experiments reveal that BTS units significantly improve the performance of Korean word embedding on all intrinsic and extrinsic tasks in our evaluation. In particular, BTS-based word embedding outperforms the state-of-the-art Korean word embedding by 11.8% in word analogy. We further investigate the unique advantages provided by BTS units through in-depth analysis.
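A rough sketch of how a word embedding can be composed from sub-character units, in the spirit of the BTS-based framework; the `decompose` function below is a stand-in, since the actual decomposition of Hangeul letters into BTS stroke units is defined in the paper.

```python
import torch
import torch.nn as nn

class SubwordUnitEmbedding(nn.Module):
    """Embed a word as the sum of its sub-character unit embeddings."""

    def __init__(self, num_units: int, dim: int):
        super().__init__()
        self.unit_emb = nn.Embedding(num_units, dim)

    def forward(self, unit_ids: torch.Tensor) -> torch.Tensor:
        # unit_ids: indices of the sub-character units that make up one word
        return self.unit_emb(unit_ids).sum(dim=0)


def decompose(word: str) -> list[int]:
    # Placeholder mapping from characters to unit ids; a real implementation
    # would decompose each letter into BTS stroke units per the paper.
    return [ord(ch) % 1000 for ch in word]


emb = SubwordUnitEmbedding(num_units=1000, dim=128)
vec = emb(torch.tensor(decompose("한글")))
print(vec.shape)  # torch.Size([128])
```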
Co-authors
- SangKeun Lee 3
- Jun-Hyung Park 2
- Youjin Kang 2
- Hyuntae Park 1
- Mingyu Lee 1