Smiles are a fundamental facial expression for successful human-agent communication. The growing number of publications in this domain presents an opportunity for future research and design to be informed by a scoping review of the extant literature. This semi-automated review expedites the first steps toward mapping Virtual Human (VH) smile research. This paper contributes an overview of the status quo of VH smile research, identifies research streams through cluster analysis, identifies prolific authors in the field, and provides evidence that a full scoping review is needed to synthesize the findings in this expanding domain. To enable collaboration, we provide full access to the refined VH smile dataset, keyword and author word clouds, as well as interactive evidence maps.
This paper introduces a multimodal discussion corpus for the study of head movement and turn-taking patterns in debates. Because participants debated either alone or in a pair, cooperation and competition and their nonverbal correlates can be analyzed. In addition to the video and audio recordings, the corpus contains automatically estimated head movements and manual annotations of who is speaking and who is looking where. The corpus consists of over 2 hours of debates in 6 groups with 18 participants in total. We describe the recording setup and present initial analyses of the recorded data. We found that the person who acted as the single debater speaks more and receives more attention than the other debaters, even when corrected for speaking time. We also found that the single debater was more likely to speak after a team debater. Future work will further analyze the relation between speaking and looking patterns, the outcome of the debate, and the perceived dominance of the debaters.
We present the results of two trials testing procedures for annotating emotion and mental state in the AMI corpus. The first procedure is an adaptation of the FeelTrace method, focusing on continuous labelling of emotion dimensions. The second method centers on more discrete labelling of segments using categorical labels. The results reported are promising for this difficult task.