Catherine Pelachaud


2020

Multimodal Analysis of Cohesion in Multi-party Interactions
Reshmashree Bangalore Kantharaju | Caroline Langlet | Mukesh Barange | Chloé Clavel | Catherine Pelachaud
Proceedings of the 12th Language Resources and Evaluation Conference

Group cohesion is an emergent phenomenon reflecting the group members’ shared commitment to group tasks and the interpersonal attraction among them. This paper presents a multimodal analysis of group cohesion using a corpus of multi-party interactions. We utilize 16 two-minute segments from the AMI corpus annotated with cohesion. We define three layers of modalities: non-verbal social cues, dialogue acts, and interruptions. The initial analysis is performed at the individual level; we then combine the different modalities to observe their impact on the perceived level of cohesion. Results indicate that occurrences of laughter and interruption are higher in highly cohesive segments. We also observe that dialogue acts and head nods did not affect the perceived level of cohesion on their own, but did when combined. Overall, the analysis shows that multimodal cues are crucial for accurate analysis of group cohesion.
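
As an informal illustration of the per-segment cue counting the abstract describes (not the authors' code), the Python sketch below tallies annotated cues in each segment and compares their mean occurrence between high- and low-cohesion segments; the segment format and cue names are assumptions.

```python
# A minimal sketch (not the authors' code) of per-segment cue counting:
# tally annotated multimodal cues in each segment and compare their mean
# occurrence between high- and low-cohesion segments.
from collections import Counter
from statistics import mean

# Hypothetical annotation format: one dict per two-minute segment, with a
# perceived-cohesion label and a flat list of observed cue events.
segments = [
    {"cohesion": "high", "cues": ["laughter", "interruption", "head_nod", "laughter"]},
    {"cohesion": "high", "cues": ["laughter", "head_nod"]},
    {"cohesion": "low",  "cues": ["head_nod"]},
    {"cohesion": "low",  "cues": ["interruption"]},
]

def mean_cue_count(segments, label, cue):
    """Mean number of occurrences of `cue` per segment carrying `label`."""
    counts = [Counter(s["cues"])[cue] for s in segments if s["cohesion"] == label]
    return mean(counts) if counts else 0.0

for cue in ("laughter", "interruption", "head_nod"):
    print(cue, mean_cue_count(segments, "high", cue), mean_cue_count(segments, "low", cue))
```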

The ISO Standard for Dialogue Act Annotation, Second Edition
Harry Bunt | Volha Petukhova | Emer Gilmartin | Catherine Pelachaud | Alex Fang | Simon Keizer | Laurent Prévot
Proceedings of the 12th Language Resources and Evaluation Conference

ISO standard 24617-2 for dialogue act annotation, established in 2012, has in the past few years been used both in corpus annotation and in the design of components for spoken and multimodal dialogue systems. This use has brought to light some inaccuracies and undesirable limitations of the standard, which are addressed in a proposed second edition. The second edition allows a more accurate annotation of dependence relations and rhetorical relations in dialogue. Following the ISO 24617-4 principles of semantic annotation, and borrowing ideas from EmotionML, a triple-layered plug-in mechanism is introduced that allows dialogue act descriptions to be enriched with information about their semantic content and accompanying emotions, as well as other information, and that allows the annotation scheme to be customised by adding application-specific dialogue act types.
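
The triple-layered plug-in idea can be pictured schematically. The Python sketch below only illustrates the layering (a core dialogue act description plus optional semantic-content, emotion, and application-specific enrichments); it does not reproduce the ISO 24617-2 DiAML syntax, and all field names are assumptions.

```python
# An illustrative layering only; this is NOT the DiAML syntax of
# ISO 24617-2. A core dialogue act description is optionally enriched
# with semantic content, emotion information (cf. EmotionML), and
# application-specific extensions, mirroring the three plug-in layers.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DialogueAct:
    speaker: str
    dimension: str                          # e.g. "task", "turnManagement"
    communicative_function: str             # e.g. "inform", "propositionalQuestion"
    semantic_content: Optional[str] = None  # plug-in: semantic content
    emotion: Optional[str] = None           # plug-in: accompanying emotion
    extensions: dict = field(default_factory=dict)  # plug-in: app-specific act types

# Hypothetical enriched annotation of one functional segment.
act = DialogueAct(
    speaker="A",
    dimension="task",
    communicative_function="inform",
    emotion="joy",
    extensions={"domainAct": "bookFlight"},
)
print(act)
```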

2018

Downward Compatible Revision of Dialogue Annotation
Harry Bunt | Emer Gilmartin | Simon Keizer | Catherine Pelachaud | Volha Petukhova | Laurent Prévot | Mariët Theune
Proceedings 14th Joint ACL - ISO Workshop on Interoperable Semantic Annotation

From analysis to modeling of engagement as sequences of multimodal behaviors
Soumia Dermouche | Catherine Pelachaud
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2015

Invited Talk: Modeling Socio-Emotional Humanoid Agent
Catherine Pelachaud
Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015)

Topic Transition Strategies for an Information-Giving Agent
Nadine Glas | Catherine Pelachaud
Proceedings of the 15th European Workshop on Natural Language Generation (ENLG)

2014

Mining a multimodal corpus for non-verbal behavior sequences conveying attitudes
Mathieu Chollet | Magalie Ochs | Catherine Pelachaud
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Interpersonal attitudes are expressed through non-verbal behaviors across a variety of modalities. The perception of these behaviors is influenced by how they are sequenced with other behaviors from the same person and with behaviors from other interactants. In this paper, we present a method for extracting and generating sequences of non-verbal signals expressing interpersonal attitudes. These sequences are used as part of a framework for non-verbal expression with Embodied Conversational Agents that considers different features of non-verbal behavior: global behavior tendencies, interpersonal reactions, sequencing of non-verbal signals, and communicative intentions. Our method applies a sequence mining technique to an annotated multimodal corpus to extract sequences characteristic of different attitudes. New sequences of non-verbal signals are generated using a probabilistic model and evaluated against the previously mined sequences.
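
To make the mining step concrete, here is a much-simplified stand-in (not the authors' method, which mines general subsequences): it counts frequent contiguous n-grams of non-verbal signals in sequences grouped by attitude label. The corpus format and signal names are invented for illustration.

```python
# A much-simplified stand-in for sequence mining (not the authors'
# method): count frequent contiguous n-grams of non-verbal signals in
# sequences grouped by attitude label. Real sequence mining (e.g.
# PrefixSpan-style algorithms) also finds non-contiguous subsequences.
from collections import Counter

# Invented corpus: attitude label -> list of annotated signal sequences.
corpus = {
    "friendly": [["smile", "head_nod", "gaze_at"], ["smile", "gaze_at"]],
    "dominant": [["gaze_at", "lean_forward", "gesture_large"]],
}

def frequent_ngrams(sequences, n=2, top=3):
    """Return the `top` most frequent length-`n` contiguous subsequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return counts.most_common(top)

for attitude, seqs in corpus.items():
    print(attitude, frequent_ngrams(seqs))
```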

Emilya: Emotional body expression in daily actions database
Nesrine Fourati | Catherine Pelachaud
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Studies of the bodily expression of emotion have so far mostly focused on body movement patterns associated with emotional expression. Recently, there has been increasing interest in the expression of emotion in daily actions, also called non-emblematic movements (such as walking or knocking at a door). Previous studies were based on databases limited to a small range of movement tasks or emotional states. In this paper, we describe our new database of emotional body expression in daily actions, in which 11 actors express 8 emotions in 7 actions. We used motion capture technology to record body movements, but we also recorded synchronized audio-visual data to broaden the database’s usefulness for different research purposes. We also investigate the match between the expressed emotions and the perceived ones through a perceptive study. The first results of this study are discussed in this paper.

A model to generate adaptive multimodal job interviews with a virtual recruiter
Zoraida Callejas | Brian Ravenet | Magalie Ochs | Catherine Pelachaud
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper presents an adaptive model of multimodal social behavior for embodied conversational agents. The context of this research is training youngsters for job interviews in a serious game in which the agent plays the role of a virtual recruiter. With the proposed model, the agent is able to adapt its social behavior according to the anxiety level of the trainee and a predefined difficulty level of the game. This information is used to select the objective of the system (to challenge or comfort the user), which is achieved by selecting the complexity of the next question posed and the agent’s verbal and non-verbal behavior. We carried out a perceptive study showing that the multimodal behavior of an agent implementing our model successfully conveys the expected social attitudes.
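
As a rough sketch of the adaptation loop described above (assumed logic, not the paper's model), the snippet below picks the system objective from a trainee anxiety score and the game difficulty, then derives the next question's complexity; thresholds and value ranges are arbitrary illustrations.

```python
# Assumed logic, not the paper's model: select the recruiter's objective
# (challenge or comfort) from the trainee's anxiety and the game's
# difficulty, then derive the next question's complexity. Thresholds and
# ranges are arbitrary illustrations.
def select_objective(anxiety: float, difficulty: int) -> str:
    """Comfort a highly anxious trainee at low difficulty; else challenge."""
    return "comfort" if anxiety > 0.7 and difficulty <= 1 else "challenge"

def next_question_complexity(objective: str, difficulty: int) -> int:
    # Challenging raises complexity with difficulty; comforting lowers it.
    return difficulty + 1 if objective == "challenge" else max(0, difficulty - 1)

objective = select_objective(anxiety=0.8, difficulty=1)  # hypothetical inputs
print(objective, next_question_complexity(objective, difficulty=1))
```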

2010

The AVLaughterCycle Database
Jérôme Urbain | Elisabetta Bevacqua | Thierry Dutoit | Alexis Moinet | Radoslaw Niewiadomski | Catherine Pelachaud | Benjamin Picart | Joëlle Tilmanne | Johannes Wagner
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents the large audiovisual laughter database recorded as part of the AVLaughterCycle project, held during the eNTERFACE’09 Workshop in Genova; 24 subjects participated. The freely available database includes audio and video recordings as well as facial motion tracking, thanks to markers placed on the subjects’ faces. Annotations of the recordings, focusing on laughter description, are also provided and presented in this paper. In total, the corpus contains more than 1000 spontaneous laughs and 27 acted laughs. The laughter utterances are highly variable: durations range from 250 ms to 82 s, and the sounds cover voiced vowels, breath-like expirations, and hum-, hiccup- or grunt-like sounds, among others. However, as the subjects had no one to interact with, the database contains very few speech-laughs. Acted laughs tend to be longer than spontaneous ones and are more often composed of voiced vowels. The database can be useful for automatic laughter processing or cognitive science research. Within the AVLaughterCycle project, it has served to animate a laughing virtual agent whose output laugh is linked to the conversational partner’s input laugh.

2002

From Discourse Plans to Believable Behavior Generation
Berardina De Carolis | Valeria Carofiglio | Catherine Pelachaud
Proceedings of the International Natural Language Generation Conference