Corpus of Multimodal Interaction for Collaborative Planning

Miltiadis Marios Katsakioris, Helen Hastie, Ioannis Konstas, Atanas Laskov
Abstract
As autonomous systems become more commonplace, we need a way to easily and naturally communicate our goals to them and to collaboratively devise a plan for achieving those goals. To this end, we conducted a Wizard of Oz study to gather data and investigate how operators would collaboratively make plans via a conversational ‘planning assistant’ for remote autonomous systems. We present here a corpus of 22 dialogs from expert operators, which can be used to train such a system. Data analysis shows that multimodality is key to successful interaction, measured both quantitatively and qualitatively via user feedback.
Anthology ID:
W19-1601
Volume:
Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Archna Bhatia, Yonatan Bisk, Parisa Kordjamshidi, Jesse Thomason
Venue:
RoboNLP
Publisher:
Association for Computational Linguistics
Pages:
1–6
URL:
https://aclanthology.org/W19-1601
DOI:
10.18653/v1/W19-1601
Bibkey:
Cite (ACL):
Miltiadis Marios Katsakioris, Helen Hastie, Ioannis Konstas, and Atanas Laskov. 2019. Corpus of Multimodal Interaction for Collaborative Planning. In Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), pages 1–6, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
Corpus of Multimodal Interaction for Collaborative Planning (Katsakioris et al., RoboNLP 2019)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/W19-1601.pdf