Saim Shin

Also published as: Sa-Im Shin


2021

A Model of Cross-Lingual Knowledge-Grounded Response Generation for Open-Domain Dialogue Systems
San Kim | Jin Yea Jang | Minyoung Jung | Saim Shin
Findings of the Association for Computational Linguistics: EMNLP 2021

Research on open-domain dialogue systems that allow free topics is challenging in the field of natural language processing (NLP). The performance of dialogue systems has recently been improved by methods that utilize dialogue-related knowledge; however, non-English dialogue systems struggle to reproduce the performance of English dialogue systems because securing knowledge in the same language as the dialogue system is relatively difficult. Through experiments with a Korean dialogue system, this paper shows that the performance of a non-English dialogue system can be improved by utilizing English knowledge, provided the system uses cross-lingual knowledge. For the experiments, we 1) constructed a Korean version of the Wizard of Wikipedia dataset, 2) built Korean-English T5 (KE-T5), a language model pre-trained on Korean and English corpora, and 3) developed a knowledge-grounded Korean dialogue model based on KE-T5. We observed performance improvement in the open-domain Korean dialogue model even when only English knowledge was given. The experimental results showed that the knowledge inherent in cross-lingual language models can be helpful for generating responses in open-domain dialogue systems.
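The cross-lingual grounding described above can be sketched roughly as follows — a minimal illustration (the tag tokens and function name are hypothetical, not from the paper) of how English knowledge passages might be concatenated with a Korean dialogue history into a single input sequence for a KE-T5-style encoder-decoder:

```python
def build_input(dialogue_history, knowledge_passages):
    """Flatten Korean dialogue turns and English knowledge passages
    into one input string for a T5-style seq2seq model.

    <knowledge>, <dialogue>, and <turn> are illustrative placeholder
    tokens, not the paper's actual special tokens."""
    knowledge = " ".join(knowledge_passages)
    history = " <turn> ".join(dialogue_history)
    return f"<knowledge> {knowledge} <dialogue> {history}"


# Example: a Korean dialogue grounded in an English knowledge snippet
src = build_input(
    ["영화 좋아하세요?", "네, 특히 SF 영화를 좋아해요."],
    ["Science fiction is a genre of speculative fiction."],
)
```

The decoder would then generate the Korean response conditioned on this mixed-language input, relying on the bilingual pre-training of the underlying model.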

BPM_MT: Enhanced Backchannel Prediction Model using Multi-Task Learning
Jin Yea Jang | San Kim | Minyoung Jung | Saim Shin | Gahgene Gweon
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Backchannel (BC), a short reaction signal of a listener to a speaker’s utterances, helps to improve the quality of a conversation. Several studies have been conducted to predict BC in conversation; however, the utilization of advanced natural language processing techniques using the lexical information present in a speaker’s utterances has been less considered. To address this limitation, we present a BC prediction model called BPM_MT (Backchannel prediction model with multitask learning), which utilizes KoBERT, a pre-trained language model. BPM_MT simultaneously carries out two tasks during learning: 1) BC category prediction using acoustic and lexical features, and 2) sentiment score prediction based on sentiment cues. BPM_MT exhibited a 14.24% performance improvement over the existing baseline across the four BC categories: continuer, understanding, empathic response, and no BC. In particular, for the empathic response category, a performance improvement of 17.14% was achieved.
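The multi-task objective described above can be sketched as a weighted sum of the two task losses — a minimal, self-contained illustration (the loss weight and function names are hypothetical assumptions, not values from the paper):

```python
import math


def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target BC category."""
    return -math.log(probs[target_idx])


def squared_error(pred, target):
    """Error term for the auxiliary sentiment-score regression."""
    return (pred - target) ** 2


def multitask_loss(bc_probs, bc_target, sent_pred, sent_target, weight=0.5):
    """Combined objective: main BC-category classification loss plus a
    weighted sentiment-regression auxiliary loss.
    The 0.5 weight is an illustrative choice, not the paper's setting."""
    return cross_entropy(bc_probs, bc_target) + weight * squared_error(
        sent_pred, sent_target
    )


# Four BC categories: continuer, understanding, empathic response, no BC
probs = [0.1, 0.6, 0.2, 0.1]  # model's predicted category distribution
loss = multitask_loss(probs, bc_target=1, sent_pred=0.3, sent_target=0.5)
```

Training on both tasks jointly lets the sentiment signal act as an inductive bias for the main BC prediction task, which is the general motivation for multi-task setups of this kind.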

2004

Semiautomatic Extension of CoreNet using a Bootstrapping Mechanism on Corpus-based Co-occurrences
Chris Biemann | Sa-Im Shin | Key-Sun Choi
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics

Automatic clustering of collocation for detecting practical sense boundary
Saim Shin | Key-Sun Choi
Proceedings of the ACL Interactive Poster and Demonstration Sessions

2002

Word Sense Disambiguation with Information Retrieval Technique
Jong-Hoon Oh | Saim Shin | Yong-Seok Choi | Key-Sun Choi
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)