iParaphrasing: Extracting Visually Grounded Paraphrases via an Image

Chenhui Chu, Mayu Otani, Yuta Nakashima


Abstract
A paraphrase is a restatement of the meaning of a text in other words. Paraphrases have been studied to enhance the performance of many natural language processing tasks. In this paper, we propose a novel task, iParaphrasing, which extracts visually grounded paraphrases (VGPs): different phrasal expressions that describe the same visual concept in an image. The extracted VGPs have the potential to improve vision-and-language multimodal tasks such as visual question answering and image captioning. Modeling the similarity between VGPs is the key to iParaphrasing. We apply various existing methods as well as a novel neural network-based method with image attention, and report the results of the first attempt at iParaphrasing.
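As a rough illustration of the image-attention idea described in the abstract, the sketch below attends over image region features with each phrase embedding as the query, fuses the attended visual context with the phrase embedding, and scores candidate phrase pairs by cosine similarity. All names, the dot-product attention, and the concatenation fusion are assumptions made for illustration, not the authors' architecture (see the linked repository for the actual code).

    import numpy as np

    def softmax(x):
        # Numerically stable softmax over a 1-D score vector.
        e = np.exp(x - x.max())
        return e / e.sum()

    def attended_phrase_embedding(phrase_emb, region_feats):
        # Attend over image region features using the phrase embedding as
        # the query, then fuse the attended visual context with the phrase
        # embedding (simple concatenation here; an assumption).
        scores = region_feats @ phrase_emb        # (n_regions,)
        weights = softmax(scores)                 # attention distribution
        visual_ctx = weights @ region_feats       # weighted sum of regions
        return np.concatenate([phrase_emb, visual_ctx])

    def vgp_similarity(emb_a, emb_b):
        # Cosine similarity between two visually grounded phrase embeddings.
        denom = np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-8
        return float(emb_a @ emb_b / denom)

    # Toy usage with random vectors standing in for real phrase/region features.
    rng = np.random.default_rng(0)
    regions = rng.normal(size=(5, 300))           # 5 image regions, 300-d each
    a = attended_phrase_embedding(rng.normal(size=300), regions)
    b = attended_phrase_embedding(rng.normal(size=300), regions)
    print(vgp_similarity(a, b))

Under this scheme, two phrases that attend to the same image regions receive similar visual contexts, which pushes their fused embeddings closer together; that is the intuition behind grounding paraphrase similarity in the image.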
Anthology ID:
C18-1295
Volume:
Proceedings of the 27th International Conference on Computational Linguistics
Month:
August
Year:
2018
Address:
Santa Fe, New Mexico, USA
Editors:
Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue:
COLING
Publisher:
Association for Computational Linguistics
Pages:
3479–3492
URL:
https://aclanthology.org/C18-1295
Cite (ACL):
Chenhui Chu, Mayu Otani, and Yuta Nakashima. 2018. iParaphrasing: Extracting Visually Grounded Paraphrases via an Image. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3479–3492, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
iParaphrasing: Extracting Visually Grounded Paraphrases via an Image (Chu et al., COLING 2018)
PDF:
https://preview.aclanthology.org/add_acl24_videos/C18-1295.pdf
Code:
ids-cv/coling_iparaphrasing
Data:
Flickr30k