Xiaohuan Cao
2022
Sentiment Word Aware Multimodal Refinement for Multimodal Sentiment Analysis with ASR Errors
Yang Wu | Yanyan Zhao | Hao Yang | Song Chen | Bing Qin | Xiaohuan Cao | Wenting Zhao
Findings of the Association for Computational Linguistics: ACL 2022
Multimodal sentiment analysis has attracted increasing attention and many models have been proposed. However, the performance of state-of-the-art models drops sharply when they are deployed in the real world. We find that the main reason is that real-world applications can only access text output from automatic speech recognition (ASR) models, which may contain errors due to limited model capacity. Through further analysis of the ASR outputs, we find that in some cases the sentiment words, the key sentiment elements in the textual modality, are misrecognized as other words, which changes the sentiment of the text and directly hurts the performance of multimodal sentiment analysis models. To address this problem, we propose the sentiment word aware multimodal refinement model (SWRM), which dynamically refines erroneous sentiment words by leveraging multimodal sentiment clues. Specifically, we first use the sentiment word position detection module to obtain the most likely position of the sentiment word in the text and then utilize the multimodal sentiment word refinement module to dynamically refine the sentiment word embeddings. The refined embeddings are taken as the textual inputs of the multimodal feature fusion module to predict the sentiment labels. We conduct extensive experiments on the real-world datasets MOSI-Speechbrain, MOSI-IBM, and MOSI-iFlytek, and the results demonstrate the effectiveness of our model, which surpasses the current state-of-the-art models on all three datasets. Furthermore, our approach can be easily adapted to other multimodal feature fusion models.
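The abstract outlines a three-stage pipeline (sentiment word position detection, multimodal refinement of the word embedding, feature fusion). Below is a minimal PyTorch sketch of that idea; the module names, feature dimensions, and gating scheme are assumptions for illustration, not the paper's released code.

```python
# Minimal sketch of an SWRM-style pipeline (illustrative only; the gating
# scheme and dimensions are assumptions, not the paper's implementation).
import torch
import torch.nn as nn


class SentimentWordRefiner(nn.Module):
    """Soft-detects the most likely sentiment-word position in the text and
    refines that token's embedding with acoustic/visual sentiment clues."""

    def __init__(self, d_text=768, d_audio=74, d_visual=35, d_fused=128):
        super().__init__()
        self.position_scorer = nn.Linear(d_text, 1)            # position detection
        self.clue_proj = nn.Linear(d_audio + d_visual, d_text)  # multimodal clue
        self.gate = nn.Sequential(nn.Linear(2 * d_text, d_text), nn.Sigmoid())
        self.classifier = nn.Sequential(                         # stands in for fusion
            nn.Linear(d_text, d_fused), nn.ReLU(), nn.Linear(d_fused, 1)
        )

    def forward(self, text_emb, audio_feat, visual_feat):
        # text_emb: (B, T, d_text); audio_feat: (B, d_audio); visual_feat: (B, d_visual)
        pos_probs = torch.softmax(self.position_scorer(text_emb).squeeze(-1), dim=-1)  # (B, T)
        clue = self.clue_proj(torch.cat([audio_feat, visual_feat], dim=-1))            # (B, d_text)
        target = torch.einsum("bt,btd->bd", pos_probs, text_emb)   # soft-selected token
        g = self.gate(torch.cat([target, clue], dim=-1))
        refined_token = g * target + (1 - g) * clue                # dynamically refined embedding
        # Softly replace the detected position with the refined embedding.
        refined_text = text_emb + pos_probs.unsqueeze(-1) * (refined_token.unsqueeze(1) - text_emb)
        return self.classifier(refined_text.mean(dim=1))           # sentiment score


if __name__ == "__main__":
    model = SentimentWordRefiner()
    score = model(torch.randn(2, 20, 768), torch.randn(2, 74), torch.randn(2, 35))
    print(score.shape)  # torch.Size([2, 1])
```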
2021
Enhancing Multiple-choice Machine Reading Comprehension by Punishing Illogical Interpretations
Yiming Ju | Yuanzhe Zhang | Zhixing Tian | Kang Liu | Xiaohuan Cao | Wenting Zhao | Jinlong Li | Jun Zhao
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Machine Reading Comprehension (MRC), which requires a machine to answer questions given the relevant documents, is an important way to test machines’ ability to understand human language. Multiple-choice MRC is one of the most studied MRC tasks due to the convenience of evaluation and the flexibility of its answer format. Post-hoc interpretation aims to explain a trained model and reveal how the model arrives at its predictions. One of the most important interpretation forms is to attribute model decisions to input features. Based on post-hoc interpretation methods, we assess attributions of paragraphs in multiple-choice MRC and improve the model by punishing illogical attributions. Our method improves model performance without any external information or changes to the model structure. Furthermore, we analyze how and why such a self-training method works.
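The abstract describes regularizing a multiple-choice MRC model by penalizing illogical attributions. The sketch below shows one possible gradient-times-input variant of such a penalty; the attribution method, penalty form, and all names are assumptions and may differ from the paper's exact formulation.

```python
# Illustrative attribution-based penalty (assumed formulation, not the paper's code).
import torch
import torch.nn.functional as F


def attribution_penalty(logits, embeddings, passage_mask):
    """Gradient-times-input attribution of the top-scoring option toward passage tokens.

    logits:       (B, num_options) option scores before softmax
    embeddings:   (B, T, D) input embeddings with requires_grad=True
    passage_mask: (B, T) 1.0 for passage tokens, 0.0 otherwise
    """
    top_score = logits.max(dim=-1).values.sum()
    grads = torch.autograd.grad(top_score, embeddings, create_graph=True)[0]
    attributions = (grads * embeddings).sum(dim=-1)          # (B, T)
    # Penalize negative ("illogical") attribution mass on the supporting passage.
    illogical = F.relu(-attributions) * passage_mask
    return illogical.sum(dim=-1).mean()


if __name__ == "__main__":
    emb = torch.randn(2, 10, 16, requires_grad=True)
    logits = emb.mean(dim=(1, 2)).unsqueeze(-1).repeat(1, 4)  # stand-in model output
    mask = torch.ones(2, 10)
    print(attribution_penalty(logits, emb, mask))
    # In a hypothetical training step, the penalty would be added to the task loss:
    #   loss = cross_entropy(logits, labels) + lambda_reg * attribution_penalty(...)
```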
Co-authors
- Bing Qin (秦兵) 1
- Hao Yang (杨浩) 1
- Jinlong Li 1
- Jun Zhao (赵军) 1
- Kang Liu (刘康) 1