Multimodal Emoji Prediction

Francesco Barbieri, Miguel Ballesteros, Francesco Ronzano, Horacio Saggion


Abstract
Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication that automatic systems are not used to dealing with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts, which sometimes include emojis. We show that these emojis can be predicted not only from the text, but also from the picture. Our main finding is that incorporating the two synergistic modalities in a combined model improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and can therefore complement each other.
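For illustration, below is a minimal sketch of one way such a multimodal emoji classifier could be wired up, assuming a bidirectional LSTM text encoder, precomputed CNN image features, and late fusion by concatenation. The layer sizes, the 20-emoji label set, and all names are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a multimodal emoji classifier: a text encoder and an
# image encoder are fused and fed to a softmax over the emoji vocabulary.
# Encoder choices, layer sizes, and concatenation-based fusion are
# illustrative assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn


class MultimodalEmojiClassifier(nn.Module):
    def __init__(self, vocab_size, num_emojis, embed_dim=300,
                 text_hidden=256, image_feat_dim=2048):
        super().__init__()
        # Text branch: word embeddings followed by a bidirectional LSTM.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.text_encoder = nn.LSTM(embed_dim, text_hidden,
                                    batch_first=True, bidirectional=True)
        # Image branch: precomputed CNN features (e.g. from a ResNet),
        # projected to the same dimensionality as the text representation.
        self.image_proj = nn.Linear(image_feat_dim, 2 * text_hidden)
        # Fusion: concatenate both modalities and classify.
        self.classifier = nn.Linear(4 * text_hidden, num_emojis)

    def forward(self, token_ids, image_features):
        embedded = self.embedding(token_ids)             # (B, T, E)
        _, (h_n, _) = self.text_encoder(embedded)        # h_n: (2, B, H)
        text_repr = torch.cat([h_n[0], h_n[1]], dim=-1)  # (B, 2H)
        image_repr = torch.relu(self.image_proj(image_features))  # (B, 2H)
        fused = torch.cat([text_repr, image_repr], dim=-1)         # (B, 4H)
        return self.classifier(fused)                    # emoji logits


# Example usage with random inputs (batch of 4 posts).
model = MultimodalEmojiClassifier(vocab_size=50000, num_emojis=20)
tokens = torch.randint(1, 50000, (4, 30))
img_feats = torch.randn(4, 2048)
predicted_emoji = model(tokens, img_feats).argmax(dim=-1)
```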
Anthology ID:
N18-2107
Volume:
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Month:
June
Year:
2018
Address:
New Orleans, Louisiana
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
679–686
URL:
https://aclanthology.org/N18-2107
DOI:
10.18653/v1/N18-2107
Cite (ACL):
Francesco Barbieri, Miguel Ballesteros, Francesco Ronzano, and Horacio Saggion. 2018. Multimodal Emoji Prediction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 679–686, New Orleans, Louisiana. Association for Computational Linguistics.
Cite (Informal):
Multimodal Emoji Prediction (Barbieri et al., NAACL 2018)
PDF:
https://preview.aclanthology.org/ingestion-script-update/N18-2107.pdf
Video:
http://vimeo.com/277671532