Efficient Low-rank Multimodal Fusion With Modality-Specific Factors

Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, AmirAli Bagher Zadeh, Louis-Philippe Morency


Abstract
Multimodal research is an emerging field of artificial intelligence, and one of its main research problems is multimodal fusion: the process of integrating multiple unimodal representations into one compact multimodal representation. Previous research in this field has exploited the expressiveness of tensors for multimodal representation. However, these methods often suffer from an exponential increase in dimensionality and computational complexity introduced by transforming the input into a tensor. In this paper, we propose the Low-rank Multimodal Fusion method, which performs multimodal fusion using low-rank tensors to improve efficiency. We evaluate our model on three different tasks: multimodal sentiment analysis, speaker trait analysis, and emotion recognition. Our model achieves competitive results on all these tasks while drastically reducing computational complexity. Additional experiments also show that our model performs robustly across a wide range of low-rank settings, and is indeed much more efficient in both training and inference than other methods that utilize tensor representations.
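The core idea can be made concrete with a short sketch. The NumPy example below is illustrative only, not the authors' released implementation; all dimension names and sizes are assumptions. It builds a full fusion weight tensor from rank-r modality-specific factors and checks that the factorized computation reproduces explicit tensor fusion without ever materializing the large outer-product tensor.

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical, chosen for illustration only).
d_a, d_v, d_l = 8, 16, 32   # audio / video / language feature dims
d_h, r = 4, 3               # fused output dim, decomposition rank

# Unimodal representations, each with a trailing 1 appended so the
# tensor product also retains uni- and bimodal interaction terms.
z_a = np.append(rng.standard_normal(d_a), 1.0)
z_v = np.append(rng.standard_normal(d_v), 1.0)
z_l = np.append(rng.standard_normal(d_l), 1.0)

# Modality-specific low-rank factors: a stack of r matrices per modality.
W_a = rng.standard_normal((r, d_a + 1, d_h))
W_v = rng.standard_normal((r, d_v + 1, d_h))
W_l = rng.standard_normal((r, d_l + 1, d_h))

# Explicit tensor fusion: materialize the (d_a+1 x d_v+1 x d_l+1)
# outer product Z and the full weight tensor W built from the factors.
Z = np.einsum('a,v,l->avl', z_a, z_v, z_l)
W = np.einsum('rah,rvh,rlh->avlh', W_a, W_v, W_l)
h_full = np.einsum('avl,avlh->h', Z, W)

# Low-rank fusion: project each modality with its own factors,
# multiply elementwise across modalities, then sum over the rank index.
# Neither Z nor W is materialized; cost grows linearly per modality.
h_low = (np.einsum('a,rah->rh', z_a, W_a)
         * np.einsum('v,rvh->rh', z_v, W_v)
         * np.einsum('l,rlh->rh', z_l, W_l)).sum(axis=0)

assert np.allclose(h_full, h_low)  # same result, no exponential blowup

The assertion holds because contracting a rank-decomposed weight tensor with an outer product of vectors factorizes into per-modality matrix-vector products, which is exactly the efficiency gain the paper exploits.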
Anthology ID:
P18-1209
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2247–2256
URL:
https://aclanthology.org/P18-1209
DOI:
10.18653/v1/P18-1209
Cite (ACL):
Zhun Liu, Ying Shen, Varun Bharadhwaj Lakshminarasimhan, Paul Pu Liang, AmirAli Bagher Zadeh, and Louis-Philippe Morency. 2018. Efficient Low-rank Multimodal Fusion With Modality-Specific Factors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2247–2256, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Efficient Low-rank Multimodal Fusion With Modality-Specific Factors (Liu et al., ACL 2018)
PDF:
https://preview.aclanthology.org/landing_page/P18-1209.pdf
Presentation:
P18-1209.Presentation.pdf
Video:
https://preview.aclanthology.org/landing_page/P18-1209.mp4
Code:
Justin1904/Low-rank-Multimodal-Fusion (plus additional community code)
Data:
IEMOCAP