Abstract
Much existing work in text-to-scene generation focuses on generating static scenes. By introducing a focus on motion verbs, we integrate dynamic semantics into a rich formal model of events to generate animations in real time that correlate with human conceptions of the event described. This paper presents a working system that generates these animated scenes over a test set, discussing challenges encountered and describing the solutions implemented.

- Anthology ID: C16-2012
- Volume: Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations
- Month: December
- Year: 2016
- Address: Osaka, Japan
- Editor: Hideo Watanabe
- Venue: COLING
- Publisher: The COLING 2016 Organizing Committee
- Pages: 54–58
- URL: https://aclanthology.org/C16-2012
- Cite (ACL): Nikhil Krishnaswamy and James Pustejovsky. 2016. VoxSim: A Visual Platform for Modeling Motion Language. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 54–58, Osaka, Japan. The COLING 2016 Organizing Committee.
- Cite (Informal): VoxSim: A Visual Platform for Modeling Motion Language (Krishnaswamy & Pustejovsky, COLING 2016)
- PDF: https://preview.aclanthology.org/landing_page/C16-2012.pdf