The VoxWorld Platform for Multimodal Embodied Agents

Nikhil Krishnaswamy, William Pickard, Brittany Cates, Nathaniel Blanchard, James Pustejovsky


Abstract
We present a five-year retrospective on the development of the VoxWorld platform. First introduced as a multimodal platform for modeling motion language, VoxWorld has since evolved into a platform for rapidly building and deploying embodied agents with contextual and situational awareness, capable of interacting with humans in multiple modalities and of exploring their environments. In particular, we discuss the evolution from the theoretical underpinnings of the VoxML modeling language to a platform that accommodates both neural and symbolic inputs to build agents capable of multimodal interaction and hybrid reasoning. We focus on three distinct agent implementations and the functionality needed to accommodate all of them: Diana, a virtual collaborative agent; Kirby, a mobile robot; and BabyBAW, an agent that self-guides its own exploration of the world.
Anthology ID:
2022.lrec-1.164
Volume:
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Month:
June
Year:
2022
Address:
Marseille, France
Venue:
LREC
Publisher:
European Language Resources Association
Pages:
1529–1541
URL:
https://aclanthology.org/2022.lrec-1.164
Cite (ACL):
Nikhil Krishnaswamy, William Pickard, Brittany Cates, Nathaniel Blanchard, and James Pustejovsky. 2022. The VoxWorld Platform for Multimodal Embodied Agents. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 1529–1541, Marseille, France. European Language Resources Association.
Cite (Informal):
The VoxWorld Platform for Multimodal Embodied Agents (Krishnaswamy et al., LREC 2022)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2022.lrec-1.164.pdf
Data
OpenAI Gym