SPLIT: Stance and Persuasion Prediction with Multi-modal on Image and Textual Information

Jing Zhang, Shaojun Yu, Xuan Li, Jia Geng, Zhiyuan Zheng, Joyce Ho


Abstract
Persuasiveness is a prominent personality trait that measures the extent to which a speaker can impact the beliefs, attitudes, intentions, motivations, and actions of their audience. The ImageArg shared task, a featured challenge at the 10th Workshop on Argument Mining (ArgMining) at EMNLP 2023, focuses on harnessing the ImageArg dataset to advance techniques for multimodal persuasion. In this study, we investigate the use of dual-modality (image and text) data and evaluate three distinct multimodal models. By enhancing the multimodal dataset, we demonstrate both the advantages and the limitations of state-of-the-art models.
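
The abstract does not specify which three multimodal architectures were evaluated; as an illustrative assumption only, a common dual-modality baseline for this kind of task fuses a pooled text representation (e.g., BERT) with pooled image features (e.g., ResNet-50) and passes the concatenated vector to a classification head for stance or persuasiveness prediction. A minimal PyTorch sketch under those assumptions:

```python
# Illustrative sketch only: the encoders and late-fusion strategy here are
# assumptions for exposition, not the architectures reported in the paper.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from transformers import AutoModel, AutoTokenizer


class LateFusionClassifier(nn.Module):
    """Fuse pooled text and image features for stance/persuasiveness prediction."""

    def __init__(self, text_model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_model_name)
        vision = resnet50(weights=ResNet50_Weights.DEFAULT)
        vision.fc = nn.Identity()  # expose the 2048-d pooled image features
        self.image_encoder = vision
        hidden = self.text_encoder.config.hidden_size  # 768 for BERT-base
        self.classifier = nn.Sequential(
            nn.Linear(hidden + 2048, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, num_labels),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        # [CLS] token embedding as the pooled text representation
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]
        image_feat = self.image_encoder(pixel_values)
        return self.classifier(torch.cat([text_feat, image_feat], dim=-1))


# Example forward pass with a dummy image tensor in place of a real tweet image.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = LateFusionClassifier()
batch = tokenizer(["Gun control saves lives."], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"], torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```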
Anthology ID: 2023.argmining-1.19
Volume: Proceedings of the 10th Workshop on Argument Mining
Month: December
Year: 2023
Address: Singapore
Editors: Milad Alshomary, Chung-Chi Chen, Smaranda Muresan, Joonsuk Park, Julia Romberg
Venues: ArgMining | WS
Publisher: Association for Computational Linguistics
Pages: 175–180
URL: https://aclanthology.org/2023.argmining-1.19
DOI: 10.18653/v1/2023.argmining-1.19
Cite (ACL): Jing Zhang, Shaojun Yu, Xuan Li, Jia Geng, Zhiyuan Zheng, and Joyce Ho. 2023. SPLIT: Stance and Persuasion Prediction with Multi-modal on Image and Textual Information. In Proceedings of the 10th Workshop on Argument Mining, pages 175–180, Singapore. Association for Computational Linguistics.
Cite (Informal): SPLIT: Stance and Persuasion Prediction with Multi-modal on Image and Textual Information (Zhang et al., ArgMining-WS 2023)
PDF: https://preview.aclanthology.org/nschneid-patch-4/2023.argmining-1.19.pdf