ImageArg: A Multi-modal Tweet Dataset for Image Persuasiveness Mining

Zhexiong Liu, Meiqi Guo, Yue Dai, Diane Litman


Abstract
The growing interest in developing corpora of persuasive texts has spurred applications in automated systems such as debating and essay-scoring systems; however, there is little prior work on mining image persuasiveness from an argumentative perspective. To expand persuasiveness mining into the multi-modal realm, we present ImageArg, a multi-modal dataset consisting of annotations of image persuasiveness in tweets. The annotations are based on a persuasion taxonomy we developed to explore image functionalities and means of persuasion. We benchmark image persuasiveness tasks on ImageArg using widely used multi-modal learning methods. The experimental results show that our dataset offers a useful resource for this rich and challenging topic, and that there is ample room for modeling improvement.
Anthology ID: 2022.argmining-1.1
Volume: Proceedings of the 9th Workshop on Argument Mining
Month: October
Year: 2022
Address: Online and in Gyeongju, Republic of Korea
Venue: ArgMining
Publisher: International Conference on Computational Linguistics
Pages: 1–18
URL: https://aclanthology.org/2022.argmining-1.1
Cite (ACL): Zhexiong Liu, Meiqi Guo, Yue Dai, and Diane Litman. 2022. ImageArg: A Multi-modal Tweet Dataset for Image Persuasiveness Mining. In Proceedings of the 9th Workshop on Argument Mining, pages 1–18, Online and in Gyeongju, Republic of Korea. International Conference on Computational Linguistics.
Cite (Informal): ImageArg: A Multi-modal Tweet Dataset for Image Persuasiveness Mining (Liu et al., ArgMining 2022)
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.argmining-1.1.pdf
Code: meiqiguo/argmining2022-imagearg
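
The repository above contains the authors' benchmark code. The abstract's "widely used multi-modal learning methods" points to standard text-plus-image baselines; the sketch below is only an illustration of what such a late-fusion baseline typically looks like, not the authors' exact setup. The encoder choices (BERT, ResNet-50), the feature dimensions, and the binary persuasiveness label are all assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models
    from transformers import BertModel

    class LateFusionBaseline(nn.Module):
        """Hypothetical late-fusion baseline: concatenate BERT text
        features with ResNet-50 image features and classify."""

        def __init__(self, num_classes: int = 2):
            super().__init__()
            # Text branch: BERT pooled [CLS] embedding (768-dim).
            self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
            # Image branch: ResNet-50 with its classification head removed (2048-dim).
            backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])
            # Fusion head over the concatenated modality features.
            self.classifier = nn.Linear(768 + 2048, num_classes)

        def forward(self, input_ids, attention_mask, pixel_values):
            # Encode the tweet text.
            text_feat = self.text_encoder(
                input_ids=input_ids, attention_mask=attention_mask
            ).pooler_output                                        # (batch, 768)
            # Encode the tweet image.
            image_feat = self.image_encoder(pixel_values).flatten(1)  # (batch, 2048)
            # Classify the fused representation.
            return self.classifier(torch.cat([text_feat, image_feat], dim=1))

In this pattern each modality is encoded independently and a linear head scores the concatenated features; swapping the encoders or the fusion head yields the usual family of multi-modal baselines the abstract alludes to.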