Catharine Oertel


2018

A Multimodal Corpus for Mutual Gaze and Joint Attention in Multiparty Situated Interaction
Dimosthenis Kontogiorgos | Vanya Avramova | Simon Alexanderson | Patrik Jonell | Catharine Oertel | Jonas Beskow | Gabriel Skantze | Joakim Gustafson
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Crowdsourced Multimodal Corpora Collection Tool
Patrik Jonell | Catharine Oertel | Dimosthenis Kontogiorgos | Jonas Beskow | Joakim Gustafson
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

FARMI: A FrAmework for Recording Multi-Modal Interactions
Patrik Jonell | Mattias Bystedt | Per Fallgren | Dimosthenis Kontogiorgos | José Lopes | Zofia Malisz | Samuel Mascarenhas | Catharine Oertel | Eran Raveh | Todd Shore
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2014

The Tutorbot Corpus — A Corpus for Studying Tutoring Behaviour in Multiparty Face-to-Face Spoken Dialogue
Maria Koutsombogera | Samer Al Moubayed | Bajibabu Bollepalli | Ahmed Hussen Abdelaziz | Martin Johansson | José David Aguas Lopes | Jekaterina Novikova | Catharine Oertel | Kalin Stefanov | Gül Varol
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper describes a novel experimental setup that exploits state-of-the-art capture equipment to collect a multimodally rich corpus of collaborative multiparty dialogue recorded during a game-solving task. The corpus is designed to support the development of a dialogue system platform for exploring verbal and nonverbal tutoring strategies in multiparty spoken interactions. The task centers on two participants who collaborate to solve a card-ordering game; they were paired into teams based on their degree of extraversion, as measured by a personality test. A tutor sits with the participants, helping them perform the task and organizing and balancing their interaction; the participants assessed the tutor's behavior after each interaction. Multimodal signals captured and auto-synchronized by a range of audio-visual capture technologies, together with manual annotations of the tutor's behavior, constitute the Tutorbot corpus. The corpus is used to build a situated model of the interaction based on the participants' temporally changing state of attention, their conversational engagement and verbal dominance, and the correlation of these with the verbal and visual feedback and conversation-regulating actions generated by the tutor.

2013

Exploring the effects of gaze and pauses in situated human-robot interaction
Gabriel Skantze | Anna Hjalmarsson | Catharine Oertel
Proceedings of the SIGDIAL 2013 Conference