Haehun Yang
2022
Multi-Domain Dialogue State Tracking By Neural-Retrieval Augmentation
Lohith Ravuru | Seonghan Ryu | Hyungtak Choi | Haehun Yang | Hyeonmok Ko
Findings of the Association for Computational Linguistics: AACL-IJCNLP 2022
Dialogue State Tracking (DST) is a complex task that requires precise understanding and tracking of information across multi-domain conversations between users and dialogue systems. Many task-oriented dialogue systems use dialogue state tracking to infer users’ goals from the conversation history. Existing approaches for DST are usually conditioned on previous dialogue states; however, this dependency makes it very challenging to prevent errors from propagating to subsequent turns of a dialogue. In this paper, we propose Neural Retrieval Augmentation to alleviate this problem by creating a Neural Index based on dialogue context. Our NRA-DST framework efficiently retrieves dialogue context from the index built using a combination of structured dialogue state and unstructured user/system utterances. We explore a simple pipeline resulting in a retrieval-guided generation approach for training a DST model. Experiments on different retrieval methods for augmentation show that neural retrieval augmentation is the best-performing retrieval method for DST. Our evaluations on the large-scale MultiWOZ dataset show that our model outperforms the baseline approaches.
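The retrieval step described in the abstract can be illustrated with a minimal sketch: embed each dialogue context, store the embedding alongside its dialogue state in an index, and retrieve the nearest neighbour by cosine similarity to guide generation. This is a hypothetical stand-in, not the paper's implementation — the toy `embed` function, the `NeuralIndex` class, and the example contexts/states are all invented for illustration; the paper's actual neural encoder and data are not specified here.

```python
import math
import random

def embed(text, dim=32):
    # Toy deterministic embedding standing in for a neural encoder
    # (hypothetical; the abstract does not specify the encoder).
    rng = random.Random(text)
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

class NeuralIndex:
    """Maps dialogue-context embeddings to their dialogue states."""
    def __init__(self):
        self.entries = []  # list of (vector, state) pairs

    def add(self, context, state):
        self.entries.append((embed(context), state))

    def retrieve(self, query):
        # Nearest neighbour by cosine similarity
        # (vectors are unit-length, so the dot product suffices).
        q = embed(query)
        best = max(self.entries,
                   key=lambda e: sum(a * b for a, b in zip(e[0], q)))
        return best[1]

index = NeuralIndex()
index.add("i need a cheap hotel in the north",
          {"hotel-price": "cheap", "hotel-area": "north"})
index.add("book a taxi to the station",
          {"taxi-destination": "station"})

retrieved = index.retrieve("i need a cheap hotel in the north")
```

In a retrieval-guided setup along these lines, the retrieved state would be fed to the generator as auxiliary input rather than conditioning on the model's own previous prediction, which is how retrieval can limit the error propagation the abstract describes.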
2018
Self-Learning Architecture for Natural Language Generation
Hyungtak Choi | Siddarth K.M. | Haehun Yang | Heesik Jeon | Inchul Hwang | Jihie Kim
Proceedings of the 11th International Conference on Natural Language Generation
In this paper, we propose a self-learning architecture for generating natural language templates for conversational assistants. Generating templates to cover all the combinations of slots in an intent is time-consuming and labor-intensive. We examine three different models based on our proposed architecture - a rule-based model, a Sequence-to-Sequence (Seq2Seq) model, and a Semantically Conditioned LSTM (SC-LSTM) model for the IoT domain - to reduce the human labor required for template generation. We demonstrate the feasibility of template generation for the IoT domain using our self-learning architecture. In both automatic and human evaluation, the self-learning architecture outperforms previous work trained with a fully human-labeled dataset. This is promising for commercial conversational assistant solutions.
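The rule-based baseline mentioned in the abstract can be caricatured as template lookup plus slot filling. The sketch below is a hypothetical illustration (the `TEMPLATES` bank and `generate` helper are invented for this example, not taken from the paper); it shows why hand-writing a template for every slot combination of an intent quickly becomes labor-intensive.

```python
# Hypothetical template bank for a "turn on" intent in the IoT domain,
# keyed by the sorted combination of slots present in the request.
TEMPLATES = {
    ("device",): "Turning on the {device}.",
    ("device", "location"): "Turning on the {device} in the {location}.",
}

def generate(slots):
    """Look up the template for this slot combination and fill it."""
    key = tuple(sorted(slots))
    template = TEMPLATES[key]
    return template.format(**slots)

print(generate({"device": "light", "location": "kitchen"}))
# -> Turning on the light in the kitchen.
```

Every new slot combination needs its own hand-written entry; the self-learning architecture in the paper aims to generate such templates automatically instead.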
Co-authors
- Hyungtak Choi 2
- Siddarth K.M. 1
- Heesik Jeon 1
- Inchul Hwang 1
- Jihie Kim 1