Extracting Possessions from Social Media: Images Complement Language

Dhivya Chinnappa, Srikala Murugan, Eduardo Blanco


Abstract
This paper describes a new dataset and experiments to determine whether authors of tweets possess the objects they tweet about. We work with 5,000 tweets and show that both humans and neural networks benefit from images in addition to text. We also introduce a simple yet effective strategy to incorporate visual information into any neural network beyond weights from pretrained networks. Specifically, we consider the tags identified in an image as an additional textual input, and leverage pretrained word embeddings as is usually done with regular text. Experimental results show this novel strategy is beneficial.
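The strategy described in the abstract can be sketched roughly as follows: image tags (e.g., from an off-the-shelf tagger) are treated as just another token sequence and looked up in the same embedding table as the tweet text. This is a minimal illustration, not the authors' implementation; the toy embedding table stands in for real pretrained vectors (e.g., GloVe), and the example tokens and tags are invented.

```python
import numpy as np

# Toy stand-in for a pretrained embedding table; a real system
# would load actual pretrained word vectors (e.g., GloVe).
EMBEDDINGS = {
    "new": np.array([0.1, 0.2, 0.3]),
    "guitar": np.array([0.4, 0.1, 0.0]),
    "person": np.array([0.2, 0.2, 0.5]),
}
DIM = 3

def embed(tokens):
    """Map tokens to vectors; unknown tokens get a zero vector."""
    return [EMBEDDINGS.get(t, np.zeros(DIM)) for t in tokens]

def build_input(tweet_tokens, image_tags):
    """Treat image tags as additional text: embed the tweet tokens
    and the image tags with the same table, then concatenate the
    two sequences into one input matrix for the neural network."""
    return np.stack(embed(tweet_tokens) + embed(image_tags))

# 3 tweet tokens + 2 image tags -> a 5-row input matrix
x = build_input(["my", "new", "guitar"], ["guitar", "person"])
```

The point of the design is that no image-specific architecture is needed: once tags are text, any model that consumes word embeddings can consume the visual signal too.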
Anthology ID:
D19-1061
Volume:
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Month:
November
Year:
2019
Address:
Hong Kong, China
Editors:
Kentaro Inui, Jing Jiang, Vincent Ng, Xiaojun Wan
Venues:
EMNLP | IJCNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
663–672
URL:
https://aclanthology.org/D19-1061
DOI:
10.18653/v1/D19-1061
Cite (ACL):
Dhivya Chinnappa, Srikala Murugan, and Eduardo Blanco. 2019. Extracting Possessions from Social Media: Images Complement Language. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 663–672, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Extracting Possessions from Social Media: Images Complement Language (Chinnappa et al., EMNLP-IJCNLP 2019)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/D19-1061.pdf