BPM_MT: Enhanced Backchannel Prediction Model using Multi-Task Learning

Jin Yea Jang, San Kim, Minyoung Jung, Saim Shin, Gahgene Gweon


Abstract
A backchannel (BC), a short reaction signal from a listener to a speaker's utterances, helps improve the quality of a conversation. Several studies have attempted to predict BC in conversation; however, the use of advanced natural language processing techniques on the lexical information in the speaker's utterances has received little attention. To address this limitation, we present a BC prediction model called BPM_MT (Backchannel Prediction Model with Multi-Task learning), which utilizes KoBERT, a pre-trained language model. BPM_MT simultaneously carries out two tasks during learning: 1) BC category prediction using acoustic and lexical features, and 2) sentiment score prediction based on sentiment cues. BPM_MT achieved a 14.24% performance improvement over the existing baseline across the four BC categories: continuer, understanding, empathic response, and no BC. In particular, for the empathic response category, the improvement reached 17.14%.
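The abstract describes a shared encoder trained with two objectives. As a rough illustration only (not the authors' released code), the sketch below shows one way such a multi-task setup could look in PyTorch: a KoBERT-style text embedding is fused with acoustic features, and two heads are trained jointly, one for the four BC categories and one for an auxiliary sentiment score. The feature dimensions, fusion layer, and loss weight alpha are assumptions made for illustration.

import torch
import torch.nn as nn

class BPMMTSketch(nn.Module):
    """Illustrative multi-task head (assumed architecture): a shared text
    embedding (e.g. a KoBERT [CLS] vector) is fused with acoustic features,
    then fed to a BC-category head and a sentiment-score head."""
    def __init__(self, text_dim=768, acoustic_dim=128, hidden_dim=256,
                 num_bc_classes=4):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(text_dim + acoustic_dim, hidden_dim),
            nn.ReLU(),
        )
        # 4 classes: continuer, understanding, empathic response, no BC
        self.bc_head = nn.Linear(hidden_dim, num_bc_classes)
        # auxiliary task: predict a scalar sentiment score
        self.sentiment_head = nn.Linear(hidden_dim, 1)

    def forward(self, text_emb, acoustic_feat):
        h = self.fusion(torch.cat([text_emb, acoustic_feat], dim=-1))
        return self.bc_head(h), self.sentiment_head(h)

def multitask_loss(bc_logits, bc_labels, sent_pred, sent_target, alpha=0.5):
    """Joint objective: cross-entropy for BC categories plus an auxiliary
    sentiment regression loss, weighted by alpha (hyperparameter assumed)."""
    ce = nn.functional.cross_entropy(bc_logits, bc_labels)
    mse = nn.functional.mse_loss(sent_pred.squeeze(-1), sent_target)
    return ce + alpha * mse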
Anthology ID: 2021.emnlp-main.277
Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2021
Address: Online and Punta Cana, Dominican Republic
Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 3447–3452
URL: https://aclanthology.org/2021.emnlp-main.277
DOI: 10.18653/v1/2021.emnlp-main.277
Cite (ACL): Jin Yea Jang, San Kim, Minyoung Jung, Saim Shin, and Gahgene Gweon. 2021. BPM_MT: Enhanced Backchannel Prediction Model using Multi-Task Learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3447–3452, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal): BPM_MT: Enhanced Backchannel Prediction Model using Multi-Task Learning (Jang et al., EMNLP 2021)
PDF: https://preview.aclanthology.org/add_acl24_videos/2021.emnlp-main.277.pdf
Video: https://preview.aclanthology.org/add_acl24_videos/2021.emnlp-main.277.mp4