Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images

Nyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi, Sung-Hyon Myaeng
Abstract
In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation. Training such dialogue systems generally requires a large-scale dataset consisting of multi-turn dialogues that involve images, but such datasets rarely exist. In response, this paper proposes a 45k multi-modal dialogue dataset created with minimal human intervention. Our method to create such a dataset consists of (1) preparing and pre-processing text dialogue datasets, (2) creating image-mixed dialogues by using a text-to-image replacement technique, and (3) employing a contextual-similarity-based filtering step to ensure the contextual coherence of the dataset. To evaluate the validity of our dataset, we devise a simple retrieval model for dialogue sentence prediction tasks. Automatic metrics and human evaluation results on such tasks show that our dataset can be effectively used as training data for multi-modal dialogue systems that require an understanding of images and text in a context-aware manner. Our dataset and generation code are available at https://github.com/shh1574/multi-modal-dialogue-dataset.
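The replacement-and-filtering pipeline from the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the paper uses learned sentence and image representations, whereas here a simple bag-of-words cosine similarity stands in for them, and the image captions, thresholds, and function names are all hypothetical.

```python
# Hypothetical sketch of the two core steps from the abstract:
# (2) replace utterances with semantically similar images, and
# (3) filter dialogues whose inserted images are contextually incoherent.
# Bag-of-words cosine similarity is a stand-in for the paper's embeddings.
import math
from collections import Counter

def bow(text):
    """Bag-of-words vector as a token-count Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def replace_with_images(dialogue, image_captions, replace_thresh=0.5):
    """Step 2: swap each utterance for its best-matching image
    when the utterance-caption similarity clears a threshold."""
    mixed = []
    for utt in dialogue:
        score, best = max(
            (cosine(bow(utt), bow(cap)), img)
            for img, cap in image_captions.items()
        )
        mixed.append(("<image>", best) if score >= replace_thresh
                     else ("<text>", utt))
    return mixed

def contextually_coherent(dialogue, mixed, image_captions, context_thresh=0.2):
    """Step 3: keep a dialogue only if every inserted image's caption
    is similar enough to the full dialogue context."""
    context = bow(" ".join(dialogue))
    return all(
        cosine(bow(image_captions[val]), context) >= context_thresh
        for kind, val in mixed if kind == "<image>"
    )

# Toy example with made-up captions and utterances.
captions = {"img_001": "a dog playing in the park",
            "img_002": "a plate of pasta on a table"}
dialogue = ["did you see my dog playing in the park",
            "yes he looked so happy"]
mixed = replace_with_images(dialogue, captions)
print(mixed)  # first utterance replaced by img_001, second kept as text
print(contextually_coherent(dialogue, mixed, captions))
```

In this toy run, the first utterance overlaps heavily with the caption of `img_001` and is replaced, while the second has no overlap with any caption and stays textual; the dialogue then passes the coherence filter because the inserted image's caption matches the overall context.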
Anthology ID:
2021.acl-short.113
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
897–906
URL:
https://aclanthology.org/2021.acl-short.113
DOI:
10.18653/v1/2021.acl-short.113
Cite (ACL):
Nyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi, and Sung-Hyon Myaeng. 2021. Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 897–906, Online. Association for Computational Linguistics.
Cite (Informal):
Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images (Lee et al., ACL-IJCNLP 2021)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2021.acl-short.113.pdf
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/2021.acl-short.113.mp4
Code:
shh1574/multi-modal-dialogue-dataset