Understanding Spatial Relations through Multiple Modalities

Soham Dan, Hangfeng He, Dan Roth


Abstract
Recognizing spatial relations and reasoning about them is essential in many applications, including navigation, direction giving, and human-computer interaction in general. Spatial relations between objects can be either explicit – expressed as spatial prepositions – or implicit – expressed by spatial verbs such as moving, walking, shifting, etc. Both of these, and implicit relations in particular, require significant common-sense understanding. In this paper, we introduce the task of inferring implicit and explicit spatial relations between two entities in an image. We design a model that uses both textual and visual information to predict the spatial relations, making use of the positional and size information of objects as well as image embeddings. We contrast our spatial model with powerful language models and show how our modeling complements them, improving prediction accuracy and coverage and facilitating generalization to unseen subjects, objects, and relations.
Anthology ID: 2020.lrec-1.288
Volume: Proceedings of the Twelfth Language Resources and Evaluation Conference
Month: May
Year: 2020
Address: Marseille, France
Venue: LREC
Publisher: European Language Resources Association
Pages: 2368–2372
Language: English
URL: https://aclanthology.org/2020.lrec-1.288
Cite (ACL):
Soham Dan, Hangfeng He, and Dan Roth. 2020. Understanding Spatial Relations through Multiple Modalities. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 2368–2372, Marseille, France. European Language Resources Association.
Cite (Informal):
Understanding Spatial Relations through Multiple Modalities (Dan et al., LREC 2020)
PDF: https://preview.aclanthology.org/ingestion-script-update/2020.lrec-1.288.pdf
Data: Visual Genome