EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition

Zaijing Li, Fengxiao Tang, Ming Zhao, Yusen Zhu


Abstract
Emotion recognition in conversation (ERC) aims to analyze the speaker’s state and identify their emotion in the conversation. Recent works in ERC focus on context modeling but ignore the representation of contextual emotional tendency. In order to extract multi-modal information and the emotional tendency of the utterance effectively, we propose a new structure named Emoformer to extract multi-modal emotion vectors from different modalities and fuse them with the sentence vector to form an emotion capsule. Furthermore, we design an end-to-end ERC model called EmoCaps, which extracts emotion vectors through the Emoformer structure and obtains the emotion classification results from a context analysis model. Experiments on two benchmark datasets show that our model outperforms existing state-of-the-art models.
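To make the abstract's fusion idea concrete, below is a minimal sketch (not the authors' released code) of an Emoformer-style pipeline: each modality is encoded by a small Transformer branch into an "emotion vector", and the vectors are concatenated with the sentence vector to form an emotion capsule. All class names, feature dimensions, and the use of `nn.TransformerEncoder` are illustrative assumptions.

```python
# Hypothetical sketch of the Emoformer-style emotion-capsule fusion.
# Dimensions and module structure are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class EmoformerBranch(nn.Module):
    """Single-modality branch: Transformer encoding followed by mean pooling."""

    def __init__(self, feat_dim: int, emo_dim: int, nhead: int = 4, nlayers: int = 1):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=nlayers)
        self.proj = nn.Linear(feat_dim, emo_dim)  # project to an emotion vector

    def forward(self, x):                    # x: (batch, seq_len, feat_dim)
        h = self.encoder(x).mean(dim=1)      # pool over the utterance sequence
        return self.proj(h)                  # (batch, emo_dim)


class EmotionCapsule(nn.Module):
    """Fuse per-modality emotion vectors with the sentence vector."""

    def __init__(self, text_dim=768, audio_dim=100, video_dim=512, emo_dim=64):
        super().__init__()
        self.text = EmoformerBranch(text_dim, emo_dim)
        self.audio = EmoformerBranch(audio_dim, emo_dim, nhead=2)
        self.video = EmoformerBranch(video_dim, emo_dim)

    def forward(self, text_feats, audio_feats, video_feats, sentence_vec):
        emo_vecs = [self.text(text_feats), self.audio(audio_feats), self.video(video_feats)]
        # Emotion capsule = multi-modal emotion vectors concatenated with the sentence vector;
        # a downstream context analysis model would then predict the emotion label.
        return torch.cat(emo_vecs + [sentence_vec], dim=-1)


if __name__ == "__main__":
    capsule = EmotionCapsule()
    cap = capsule(torch.randn(2, 10, 768), torch.randn(2, 10, 100),
                  torch.randn(2, 10, 512), torch.randn(2, 768))
    print(cap.shape)  # (2, 64 * 3 + 768)
```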
Anthology ID:
2022.findings-acl.126
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1610–1618
URL:
https://aclanthology.org/2022.findings-acl.126
DOI:
10.18653/v1/2022.findings-acl.126
Cite (ACL):
Zaijing Li, Fengxiao Tang, Ming Zhao, and Yusen Zhu. 2022. EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1610–1618, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition (Li et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-acl.126.pdf
Software:
 2022.findings-acl.126.software.zip
Data:
IEMOCAP, MELD