Abstract
Emotion recognition in conversation is important for an empathetic dialogue system to understand the user’s emotion and then generate appropriate emotional responses. However, most previous research focuses on modeling conversational context primarily from the textual modality, or exploits multimodal information only through simple feature concatenation. To exploit multimodal and contextual information more effectively, we propose a multimodal directed acyclic graph (MMDAG) network that injects intra-modality and cross-modality information flows into the DAG architecture. Experiments on IEMOCAP and MELD show that our model outperforms other state-of-the-art models, and comparative studies validate the effectiveness of the proposed modality fusion method.
- Anthology ID:
- 2022.lrec-1.733
- Volume:
- Proceedings of the Thirteenth Language Resources and Evaluation Conference
- Month:
- June
- Year:
- 2022
- Address:
- Marseille, France
- Venue:
- LREC
- Publisher:
- European Language Resources Association
- Pages:
- 6802–6807
- URL:
- https://aclanthology.org/2022.lrec-1.733
- Cite (ACL):
- Shuo Xu, Yuxiang Jia, Changyong Niu, and Hongying Zan. 2022. MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6802–6807, Marseille, France. European Language Resources Association.
- Cite (Informal):
- MMDAG: Multimodal Directed Acyclic Graph Network for Emotion Recognition in Conversation (Xu et al., LREC 2022)
- PDF:
- https://preview.aclanthology.org/starsem-semeval-split/2022.lrec-1.733.pdf
- Data
- IEMOCAP, MELD
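The fusion idea in the abstract, separate information flows inside each modality and across modalities over a directed acyclic graph of utterances, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the mean aggregation, and the fixed 0.5 mixing weights are illustrative assumptions; it only assumes DAG edges point from earlier to later utterances and that at least two modalities are present.

```python
import numpy as np

def dag_fuse(feats, edges):
    """Sketch of DAG-based multimodal fusion (illustrative, not MMDAG itself).

    feats: dict mapping modality name -> (num_utterances, dim) array.
    edges: list of (src, dst) pairs with src < dst, so the graph is acyclic
           and utterance order is a valid topological order.
    Assumes at least two modalities.
    """
    mods = list(feats)
    n = next(iter(feats.values())).shape[0]
    out = {m: feats[m].astype(float).copy() for m in mods}
    preds = {i: [s for s, t in edges if t == i] for i in range(n)}
    for i in range(n):  # process in utterance (topological) order
        if not preds[i]:
            continue
        for m in mods:
            # intra-modality flow: aggregate same-modality predecessor states
            intra = np.mean([out[m][j] for j in preds[i]], axis=0)
            # cross-modality flow: aggregate predecessor states of the
            # other modalities
            cross = np.mean([out[o][j] for o in mods if o != m
                             for j in preds[i]], axis=0)
            out[m][i] = feats[m][i] + 0.5 * intra + 0.5 * cross
    # fuse: concatenate the modality-specific node states per utterance
    return np.concatenate([out[m] for m in mods], axis=1)
```

In the actual model these flows would be learned (attention- or GRU-style gating rather than fixed means), but the sketch shows the structural point: each utterance node receives separately routed intra- and cross-modal messages from its DAG predecessors before fusion.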