Abstract
Emotion Recognition in Conversations (ERC) aims to predict the emotional state of speakers in a conversation, which is essentially a text classification task. Unlike sentence-level text classification, the supervised data available for ERC is limited, which potentially prevents models from reaching their full potential. In this paper, we propose a novel approach to leverage unsupervised conversation data, which is more accessible. Specifically, we propose the Conversation Completion (ConvCom) task, which attempts to select the correct answer from candidate answers to fill a masked utterance in a conversation. Then, we Pre-train a basic COntext-Dependent Encoder (Pre-CODE) on the ConvCom task. Finally, we fine-tune the Pre-CODE on the ERC datasets. Experimental results demonstrate that pre-training on unsupervised data significantly improves performance on the ERC datasets, particularly on the minority emotion classes.
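The ConvCom objective described in the abstract can be illustrated with a small data-construction sketch. The snippet below is a minimal, hypothetical illustration rather than the released wxjiao/Pre-CODE code: it masks each utterance in an unlabeled conversation, pairs it with sampled negative candidates, and emits selection examples. Names such as build_convcom_examples, the context window size, and the number of negatives are assumptions for illustration only; the paper's actual encoder and scoring model are not shown.

```python
# Minimal sketch (not the authors' implementation) of building ConvCom-style
# examples from unlabeled conversations: mask one utterance, mix it with
# sampled negative candidates, and train a model to pick the original.
import random
from dataclasses import dataclass
from typing import List


@dataclass
class ConvComExample:
    context: List[str]      # utterances surrounding the masked slot
    candidates: List[str]   # the true utterance plus sampled negatives
    label: int              # index of the true utterance in `candidates`


def build_convcom_examples(conversations: List[List[str]],
                           num_negatives: int = 4,
                           window: int = 2,
                           seed: int = 0) -> List[ConvComExample]:
    """Create candidate-selection examples from raw, unlabeled conversations."""
    rng = random.Random(seed)
    # Pool of utterances from all conversations, used to sample negatives.
    pool = [u for conv in conversations for u in conv]
    examples = []
    for conv in conversations:
        for i, target in enumerate(conv):
            # Context: utterances within `window` turns of the masked position.
            context = conv[max(0, i - window):i] + conv[i + 1:i + 1 + window]
            if not context:
                continue
            neg_pool = [u for u in pool if u != target]
            negatives = rng.sample(neg_pool, k=min(num_negatives, len(neg_pool)))
            candidates = negatives + [target]
            rng.shuffle(candidates)
            examples.append(ConvComExample(context=context,
                                           candidates=candidates,
                                           label=candidates.index(target)))
    return examples


if __name__ == "__main__":
    convs = [["How was your day?", "Pretty rough, honestly.", "Oh no, what happened?"],
             ["Did you watch the game?", "Yes, what a finish!"]]
    for ex in build_convcom_examples(convs, num_negatives=2):
        print(ex.label, ex.candidates)
```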
- Anthology ID: 2020.findings-emnlp.435
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Editors: Trevor Cohn, Yulan He, Yang Liu
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4839–4846
- URL: https://aclanthology.org/2020.findings-emnlp.435
- DOI: 10.18653/v1/2020.findings-emnlp.435
- Cite (ACL): Wenxiang Jiao, Michael Lyu, and Irwin King. 2020. Exploiting Unsupervised Data for Emotion Recognition in Conversations. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4839–4846, Online. Association for Computational Linguistics.
- Cite (Informal): Exploiting Unsupervised Data for Emotion Recognition in Conversations (Jiao et al., Findings 2020)
- PDF: https://preview.aclanthology.org/add_acl24_videos/2020.findings-emnlp.435.pdf
- Code: wxjiao/Pre-CODE
- Data: EmotionLines, IEMOCAP