Athanasia-Lida Dimou


2016

Multimodal Resources for Human-Robot Communication Modelling
Stavroula-Evita Fotinea | Eleni Efthimiou | Maria Koutsombogera | Athanasia-Lida Dimou | Theodore Goulas | Kyriaki Vasilaki
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

This paper reports on work related to the modelling of Human-Robot Communication on the basis of multimodal and multisensory human behaviour analysis. A primary focus in this framework of analysis is the definition of the semantics of human actions in interaction, their capture, and their representation in terms of behavioural patterns that, in turn, feed a multimodal human-robot communication system. Semantic analysis encompasses oral and sign languages, as well as verbal and non-verbal communicative signals, in order to achieve effective, natural interaction between elderly users with mild walking and cognitive impairments and an assistive robotic platform.
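As a rough illustration of the kind of representation the abstract describes, the sketch below models a time-aligned multimodal behavioural pattern (a speech or sign cue plus non-verbal signals) and maps its recognised intent to a robot response. All class names, labels, and the intent-to-action mapping are hypothetical assumptions for illustration only; the paper does not specify this data structure.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModalityCue:
    """One time-aligned observation from a single channel (hypothetical schema)."""
    channel: str          # e.g. "speech", "sign", "gesture", "gaze"
    label: str            # semantic label assigned by the analysis, e.g. "request_help"
    start_ms: int
    end_ms: int
    confidence: float = 1.0

@dataclass
class BehaviouralPattern:
    """A bundle of cues that together carry one communicative intent."""
    cues: List[ModalityCue] = field(default_factory=list)

    def dominant_intent(self) -> Optional[str]:
        """Pick the highest-confidence semantic label across channels."""
        if not self.cues:
            return None
        return max(self.cues, key=lambda c: c.confidence).label

# Hypothetical mapping from recognised intent to an assistive-platform action.
INTENT_TO_ACTION = {
    "request_help": "approach_user",
    "stop": "halt_motion",
    "acknowledge": "continue_task",
}

def select_robot_action(pattern: BehaviouralPattern) -> str:
    """Translate a multimodal behavioural pattern into a robot action."""
    intent = pattern.dominant_intent()
    return INTENT_TO_ACTION.get(intent, "ask_for_clarification")

if __name__ == "__main__":
    pattern = BehaviouralPattern(cues=[
        ModalityCue("speech", "request_help", 0, 1200, confidence=0.8),
        ModalityCue("gesture", "request_help", 200, 900, confidence=0.9),
    ])
    print(select_robot_action(pattern))  # -> "approach_user"
```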