Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment
Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, Ivan Marsic
Abstract
Multimodal affective computing, learning to recognize and interpret human affect and subjective information from multiple data sources, is still a challenge because: (i) it is hard to extract informative features to represent human affect from heterogeneous inputs; (ii) current fusion strategies only fuse different modalities at abstract levels, ignoring time-dependent interactions between modalities. To address these issues, we introduce a hierarchical multimodal architecture with attention and word-level fusion to classify utterance-level sentiment and emotion from text and audio data. Our model outperforms state-of-the-art approaches on published datasets, and we demonstrate that our model is able to visualize and interpret synchronized attention over modalities.
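As a rough illustration of the word-level fusion with attention described in the abstract, the sketch below pairs word-aligned text and audio feature sequences, fuses them per word, and attention-pools the fused representations into an utterance-level prediction. This is a minimal PyTorch sketch, not the authors' released implementation; the feature dimensions, GRU encoders, and class count are illustrative assumptions.

```python
# Minimal sketch of word-level multimodal fusion with attention (illustrative only;
# not the paper's actual architecture or hyperparameters).
import torch
import torch.nn as nn


class WordLevelFusionClassifier(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, hidden_dim=128, num_classes=4):
        super().__init__()
        # Separate sequence encoders per modality over word-aligned features.
        self.text_rnn = nn.GRU(text_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.audio_rnn = nn.GRU(audio_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Word-level fusion: concatenate aligned hidden states, then project.
        self.fuse = nn.Linear(4 * hidden_dim, hidden_dim)
        # Attention scores over fused word representations.
        self.attn = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text_feats, audio_feats):
        # text_feats:  (batch, num_words, text_dim)  word embeddings
        # audio_feats: (batch, num_words, audio_dim) acoustic features aligned per word
        h_text, _ = self.text_rnn(text_feats)
        h_audio, _ = self.audio_rnn(audio_feats)
        fused = torch.tanh(self.fuse(torch.cat([h_text, h_audio], dim=-1)))
        # Attention weights over words; weighted sum gives the utterance vector.
        weights = torch.softmax(self.attn(fused).squeeze(-1), dim=1)
        utterance = torch.bmm(weights.unsqueeze(1), fused).squeeze(1)
        return self.classifier(utterance), weights


if __name__ == "__main__":
    model = WordLevelFusionClassifier()
    text = torch.randn(2, 20, 300)   # 2 utterances, 20 words, 300-d embeddings
    audio = torch.randn(2, 20, 74)   # word-aligned acoustic features
    logits, attn = model(text, audio)
    print(logits.shape, attn.shape)  # torch.Size([2, 4]) torch.Size([2, 20])
```

Returning the attention weights alongside the logits mirrors the abstract's point that attention over word-aligned modalities can be visualized and interpreted.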
- Anthology ID:
- P18-1207
- Volume:
- Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
- Month:
- July
- Year:
- 2018
- Address:
- Melbourne, Australia
- Editors:
- Iryna Gurevych, Yusuke Miyao
- Venue:
- ACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2225–2235
- URL:
- https://preview.aclanthology.org/build-pipeline-with-new-library/P18-1207/
- DOI:
- 10.18653/v1/P18-1207
- Cite (ACL):
- Yue Gu, Kangning Yang, Shiyu Fu, Shuhong Chen, Xinyu Li, and Ivan Marsic. 2018. Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2225–2235, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal):
- Multimodal Affective Analysis Using Hierarchical Attention Strategy with Word-Level Alignment (Gu et al., ACL 2018)
- PDF:
- https://preview.aclanthology.org/build-pipeline-with-new-library/P18-1207.pdf
- Data
- IEMOCAP, Multimodal Opinion-level Sentiment Intensity