Inchul Hwang


2024

Improved Text Emotion Prediction Using Combined Valence and Arousal Ordinal Classification
Michail Mitsios | Georgios Vamvoukakis | Georgia Maniati | Nikolaos Ellinas | Georgios Dimitriou | Konstantinos Markopoulos | Panos Kakoulidis | Alexandra Vioni | Myrsini Christidou | Junkwang Oh | Gunu Jho | Inchul Hwang | Georgios Vardaxoglou | Aimilios Chalamandaris | Pirros Tsiakoulis | Spyros Raptis
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)

Emotion detection in textual data has received growing interest in recent years, as it is pivotal for developing empathetic human-computer interaction systems. This paper introduces a method for categorizing emotions from text, which acknowledges and differentiates between the diversified similarities and distinctions of various emotions. Initially, we establish a baseline by training a transformer-based model for standard emotion classification, achieving state-of-the-art performance. We argue that not all misclassifications are of the same importance, as there are perceptual similarities among emotional classes. We thus redefine the emotion labeling problem by shifting it from a traditional classification model to an ordinal classification one, where discrete emotions are arranged in a sequential order according to their valence levels. Finally, we propose a method that performs ordinal classification in the two-dimensional emotion space, considering both valence and arousal scales. The results show that our approach not only preserves high accuracy in emotion prediction but also significantly reduces the magnitude of errors in cases of misclassification.
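The core idea, ordering discrete emotions along the valence scale and predicting rank thresholds rather than independent classes, can be sketched with a CORAL-style ordinal head. This is a minimal sketch, assuming a particular emotion ordering, a 768-dimensional sentence encoder, and this specific head design; none of these are confirmed as the paper's exact setup.

```python
# Minimal sketch of ordinal emotion classification over a valence scale,
# using a CORAL-style head: K-1 binary "is the rank above threshold k"
# classifiers that share one weight vector. Illustrative, not the paper's
# exact architecture.
import torch
import torch.nn as nn

# Hypothetical valence ordering of discrete emotions (assumption).
EMOTIONS = ["anger", "fear", "sadness", "neutral", "joy"]
K = len(EMOTIONS)

class OrdinalHead(nn.Module):
    """Shared-weight binary classifiers for the K-1 ordinal thresholds."""
    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, 1, bias=False)
        self.biases = nn.Parameter(torch.zeros(num_classes - 1))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # One probability per threshold k: an estimate of P(y > k).
        return torch.sigmoid(self.fc(h) + self.biases)

def ordinal_targets(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # Encode class index y as K-1 binary indicators [y > 0, ..., y > K-2].
    thresholds = torch.arange(num_classes - 1)
    return (labels.unsqueeze(1) > thresholds).float()

# Toy usage with random vectors standing in for transformer sentence embeddings.
h = torch.randn(4, 768)
y = torch.tensor([0, 2, 3, 4])          # indices into EMOTIONS
head = OrdinalHead(768, K)
probs = head(h)                          # shape (4, K-1)
loss = nn.functional.binary_cross_entropy(probs, ordinal_targets(y, K))
pred = (probs > 0.5).sum(dim=1)          # predicted rank = thresholds passed
```

Under this formulation a misprediction tends to land at a neighboring rank rather than an arbitrary class, which is the error-magnitude reduction the abstract reports; extending the same idea to two scales (valence and arousal) means running an ordinal head per dimension.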

2019

VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization
Hyungtak Choi | Lohith Ravuru | Tomasz Dryjański | Sunghan Rye | Donghyun Lee | Hojung Lee | Inchul Hwang
Proceedings of the 12th International Conference on Natural Language Generation

This paper describes our submission to the TL;DR challenge. Neural abstractive summarization models have been successful in generating fluent and consistent summaries with advancements like the copy (Pointer-generator) and coverage mechanisms. However, these models suffer from their extractive nature as they learn to copy words from the source text. In this paper, we propose a novel abstractive model based on Variational Autoencoder (VAE) to address this issue. We also propose a Unified Summarization Framework for the generation of summaries. Our model eliminates non-critical information at a sentence-level with an extractive summarization module and generates the summary word by word using an abstractive summarization module. To implement our framework, we combine submodules with state-of-the-art techniques including Pointer-Generator Network (PGN) and BERT while also using our new VAE-PGN abstractive model. We evaluate our model on the benchmark Reddit corpus as part of the TL;DR challenge and show that our model outperforms the baseline in ROUGE score while generating diverse summaries.
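The two-stage design, an extractive filter followed by an abstractive generator, can be outlined as below. This is only a pipeline sketch: the salience scorer and the generator are trivial stand-ins (word-overlap scoring and truncation) for the paper's BERT-based extractive module and VAE-PGN decoder.

```python
# Sketch of a two-stage summarization pipeline: an extractive pass drops
# low-salience sentences, then an abstractive stage produces the summary.
# Both stages are simplified placeholders for illustration.
import re
from collections import Counter

def split_sentences(text: str) -> list[str]:
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def salience(sentence: str, doc_freq: Counter) -> float:
    # Stand-in scorer: average corpus frequency of the sentence's words.
    words = sentence.lower().split()
    return sum(doc_freq[w] for w in words) / max(len(words), 1)

def extractive_stage(text: str, keep_ratio: float = 0.5) -> list[str]:
    sentences = split_sentences(text)
    doc_freq = Counter(w for s in sentences for w in s.lower().split())
    ranked = sorted(sentences, key=lambda s: salience(s, doc_freq), reverse=True)
    kept = set(ranked[: max(1, int(len(sentences) * keep_ratio))])
    return [s for s in sentences if s in kept]   # preserve original order

def abstractive_stage(sentences: list[str]) -> str:
    # Placeholder for a word-by-word abstractive decoder such as VAE-PGN.
    return " ".join(sentences)[:200]

def summarize(text: str) -> str:
    return abstractive_stage(extractive_stage(text))
```

The design point the abstract makes is that filtering at the sentence level first lets the abstractive model attend only to salient content, countering the copy mechanism's tendency to reproduce the source verbatim.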

2018

Self-Learning Architecture for Natural Language Generation
Hyungtak Choi | Siddarth K.M. | Haehun Yang | Heesik Jeon | Inchul Hwang | Jihie Kim
Proceedings of the 11th International Conference on Natural Language Generation

In this paper, we propose a self-learning architecture for generating natural language templates for conversational assistants. Generating templates to cover all the combinations of slots in an intent is time-consuming and labor-intensive. We examine three different models based on our proposed architecture - a Rule-based model, a Sequence-to-Sequence (Seq2Seq) model, and a Semantically Conditioned LSTM (SC-LSTM) model for the IoT domain - to reduce the human labor required for template generation. We demonstrate the feasibility of template generation for the IoT domain using our self-learning architecture. In both automatic and human evaluation, the self-learning architecture outperforms previous works trained with a fully human-labeled dataset. This is promising for commercial conversational assistant solutions.
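To make the scale of the problem concrete, the rule-based variant can be sketched as enumerating every slot combination for an intent and composing a surface template for each. The slot names and phrase inventory below are invented for illustration and are not from the paper's IoT dataset.

```python
# Sketch of rule-based template generation: for a given intent, enumerate
# all non-empty slot combinations and stitch per-slot phrases into a
# template. Slot inventory is hypothetical.
from itertools import combinations

SLOT_PHRASES = {  # hypothetical IoT-domain slot realizations
    "device":   "the {device}",
    "location": "in the {location}",
    "time":     "at {time}",
}

def make_template(intent: str, slots: tuple[str, ...]) -> str:
    parts = [SLOT_PHRASES[s] for s in slots if s in SLOT_PHRASES]
    return f"OK, I will {intent} " + " ".join(parts) + "."

def all_templates(intent: str, slot_names: list[str]) -> list[str]:
    # The space of slot combinations grows as 2^n - 1, which is exactly
    # why hand-writing templates for every combination is labor-intensive.
    out = []
    for r in range(1, len(slot_names) + 1):
        for combo in combinations(slot_names, r):
            out.append(make_template(intent, combo))
    return out

print(all_templates("turn on", ["device", "location", "time"]))
# e.g. 'OK, I will turn on the {device} in the {location} at {time}.'
```

The learned variants (Seq2Seq and SC-LSTM) replace the fixed phrase table with generation conditioned on the intent and slot set, which is what allows the architecture to self-learn new templates instead of relying on a fully human-labeled inventory.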