Inchul Hwang
2019
VAE-PGN based Abstractive Model in Multi-stage Architecture for Text Summarization
Hyungtak Choi | Lohith Ravuru | Tomasz Dryjański | Sunghan Rye | Donghyun Lee | Hojung Lee | Inchul Hwang
Proceedings of the 12th International Conference on Natural Language Generation
This paper describes our submission to the TL;DR challenge. Neural abstractive summarization models have been successful in generating fluent and consistent summaries with advancements like the copy (Pointer-generator) and coverage mechanisms. However, these models suffer from their extractive nature as they learn to copy words from the source text. In this paper, we propose a novel abstractive model based on Variational Autoencoder (VAE) to address this issue. We also propose a Unified Summarization Framework for the generation of summaries. Our model eliminates non-critical information at a sentence-level with an extractive summarization module and generates the summary word by word using an abstractive summarization module. To implement our framework, we combine submodules with state-of-the-art techniques including Pointer-Generator Network (PGN) and BERT while also using our new VAE-PGN abstractive model. We evaluate our model on the benchmark Reddit corpus as part of the TL;DR challenge and show that our model outperforms the baseline in ROUGE score while generating diverse summaries.
2018
Self-Learning Architecture for Natural Language Generation
Hyungtak Choi | Siddarth K.M. | Haehun Yang | Heesik Jeon | Inchul Hwang | Jihie Kim
Proceedings of the 11th International Conference on Natural Language Generation
In this paper, we propose a self-learning architecture for generating natural language templates for conversational assistants. Generating templates to cover all the combinations of slots in an intent is time-consuming and labor-intensive. To reduce the human labor required for template generation, we examine three different models based on our proposed architecture for the IoT domain: a Rule-based model, a Sequence-to-Sequence (Seq2Seq) model, and a Semantically Conditioned LSTM (SC-LSTM) model. We demonstrate the feasibility of template generation for the IoT domain using our self-learning architecture. In both automatic and human evaluation, the self-learning architecture outperforms previous works trained with a fully human-labeled dataset. This is promising for commercial conversational assistant solutions.