DNN Multimodal Fusion Techniques for Predicting Video Sentiment

Jennifer Williams, Ramona Comanescu, Oana Radu, Leimin Tian


Abstract
We present our work on sentiment prediction using the benchmark MOSI dataset from the CMU-MultimodalDataSDK. Previous work on multimodal sentiment analysis has focused on input-level feature fusion or decision-level fusion. Here, we propose an intermediate-level feature fusion, which merges the weights learned for each modality (audio, video, and text) during training and then applies additional training to the fused model. Moreover, we tested principal component analysis (PCA) for feature selection. We found that applying PCA increases unimodal performance, and that multimodal fusion outperforms unimodal models. Our experiments show that our proposed intermediate-level feature fusion outperforms other fusion techniques, achieving the best performance, an overall binary accuracy of 74.0%, with the video+text modalities. Our work also improves feature selection for unimodal sentiment analysis, while proposing a novel and effective multimodal fusion architecture for this task.
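
As a rough illustration of the approach described in the abstract, the sketch below (PyTorch with scikit-learn, not the authors' implementation) encodes each modality with its own subnetwork, concatenates the intermediate hidden representations, and trains an additional fusion head on top, with a PCA step standing in for the feature selection. Layer sizes, the number of PCA components, and the video+text pairing are illustrative assumptions.

```python
# Minimal sketch of intermediate-level fusion: each modality is encoded by its own
# subnetwork, the hidden representations are concatenated, and a shared fusion head
# receives additional training on top. All dimensions below are assumptions, not the
# configuration reported in the paper.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA


class ModalityEncoder(nn.Module):
    """Unimodal subnetwork producing an intermediate hidden representation."""
    def __init__(self, in_dim, hidden_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class IntermediateFusionModel(nn.Module):
    """Concatenate per-modality hidden states, then train fusion layers on top."""
    def __init__(self, video_dim, text_dim, hidden_dim=32):
        super().__init__()
        self.video_enc = ModalityEncoder(video_dim, hidden_dim)
        self.text_enc = ModalityEncoder(text_dim, hidden_dim)
        self.fusion_head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # binary sentiment logit
        )

    def forward(self, video_feats, text_feats):
        h = torch.cat(
            [self.video_enc(video_feats), self.text_enc(text_feats)], dim=-1
        )
        return self.fusion_head(h)


def reduce_features(train_X, test_X, n_components=20):
    """PCA-based feature selection applied per modality before training
    (hypothetical 20-component reduction)."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(train_X), pca.transform(test_X)
```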
Anthology ID:
W18-3309
Volume:
Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
64–72
URL:
https://aclanthology.org/W18-3309
DOI:
10.18653/v1/W18-3309
Bibkey:
Cite (ACL):
Jennifer Williams, Ramona Comanescu, Oana Radu, and Leimin Tian. 2018. DNN Multimodal Fusion Techniques for Predicting Video Sentiment. In Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML), pages 64–72, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
DNN Multimodal Fusion Techniques for Predicting Video Sentiment (Williams et al., ACL 2018)
PDF:
https://preview.aclanthology.org/auto-file-uploads/W18-3309.pdf
Data
Multimodal Opinion-level Sentiment Intensity