Thierry Dutoit


2018

ASR-based Features for Emotion Recognition: A Transfer Learning Approach
Noé Tits | Kevin El Haddad | Thierry Dutoit
Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)

During the last decade, the applications of signal processing have drastically improved with deep learning. However, areas of affective computing such as emotional speech synthesis or emotion recognition from spoken language remain challenging. In this paper, we investigate the use of a neural Automatic Speech Recognition (ASR) system as a feature extractor for emotion recognition. We show that these features outperform the eGeMAPS feature set in predicting the valence and arousal emotional dimensions, which means that the audio-to-text mapping learned by the ASR system contains information related to the emotional dimensions in spontaneous speech. We also examine the relationship between the first layers (closer to speech) and the last layers (closer to text) of the ASR system and valence/arousal.
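As an illustration of this setup, here is a minimal sketch, not taken from the paper itself: it assumes hidden-layer activations have already been extracted from a pretrained ASR network (simulated below with random arrays, as is the 88-dimensional eGeMAPS set) and fits a ridge regressor to predict valence and arousal with either feature set, so the two can be compared under the same pipeline.

```python
# Illustrative sketch only: the ASR activations and eGeMAPS features are
# simulated with random arrays; in practice they would be extracted from a
# pretrained ASR network and from an acoustic feature extractor, respectively.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_utterances = 500

# Placeholder feature matrices (one row per utterance).
asr_features = rng.normal(size=(n_utterances, 1024))    # hidden-layer activations
egemaps_features = rng.normal(size=(n_utterances, 88))  # eGeMAPS is 88-dimensional
labels = rng.uniform(-1, 1, size=(n_utterances, 2))     # valence, arousal

for name, feats in [("ASR", asr_features), ("eGeMAPS", egemaps_features)]:
    X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)
    model = Ridge(alpha=1.0).fit(X_tr, y_tr)
    print(name, "R^2:", r2_score(y_te, model.predict(X_te)))
```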

2016

AVAB-DBS: an Audio-Visual Affect Bursts Database for Synthesis
Kevin El Haddad | Hüseyin Çakmak | Stéphane Dupont | Thierry Dutoit
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

It has been shown that adding expressivity and emotional expressions to an agent’s communication system improves the quality of the interaction between that agent and a human user. In this paper, we present a multimodal database of affect bursts, which are very short non-verbal expressions with facial, vocal, and gestural components that are highly synchronized and triggered by an identifiable event. The database contains motion capture and audio data of affect bursts representing disgust, startle, and surprise, each recorded at three different levels of arousal. It is intended for synthesis purposes, to generate affect bursts of these emotions on a continuous arousal scale.
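The abstract does not specify how three discrete recordings yield a continuous arousal scale; one plausible reading, sketched below with entirely hypothetical synthesis parameters, is to interpolate parameter vectors between the recorded arousal levels.

```python
# Hypothetical illustration: interpolate synthesis parameters between the
# three recorded arousal levels to obtain a continuous arousal scale.
import numpy as np

# Placeholder parameter vectors for the low, medium, and high arousal
# recordings of one emotion (toy 2-D parameters for the sake of the sketch).
levels = np.array([0.0, 0.5, 1.0])  # normalized arousal of each recording
params = np.array([[0.2, 1.0],
                   [0.5, 1.4],
                   [0.9, 2.1]])

def params_at(arousal: float) -> np.ndarray:
    """Linearly interpolate each parameter dimension at the given arousal."""
    return np.array([np.interp(arousal, levels, params[:, d])
                     for d in range(params.shape[1])])

print(params_at(0.7))  # parameters between the medium and high recordings
```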

2014

The AV-LASYN Database: A synchronous corpus of audio and 3D facial marker data for audio-visual laughter synthesis
Hüseyin Çakmak | Jérôme Urbain | Thierry Dutoit | Joëlle Tilmanne
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

A synchronous database of acoustic and 3D facial marker data was built for audio-visual laughter synthesis. Since the aim is to use this database for HMM-based modeling and synthesis, the amount of data collected from one given subject had to be maximized. The corpus contains 251 laughter utterances from one male participant; laughter was elicited with the help of humorous videos. The resulting database is synchronous between modalities (audio and 3D facial motion capture data). Visual 3D data is provided in common formats such as BVH and C3D, with head motion and facial deformation available independently. The data is segmented and the audio has been annotated; phonetic transcriptions are available in HTK-compatible format. Principal component analysis conducted on the visual data has shown that dimensionality reduction might be relevant. The corpus may be obtained under a research license upon request to the authors.
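A minimal sketch of the dimensionality-reduction check mentioned above, with a random matrix standing in for the real marker trajectories: stack the 3D marker coordinates into a (frames × coordinates) matrix, fit PCA, and count how many components are needed to explain most of the variance.

```python
# Illustrative sketch only: real marker trajectories would be loaded from the
# C3D/BVH files; here a random matrix stands in for them.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_frames, n_markers = 2000, 30
X = rng.normal(size=(n_frames, n_markers * 3))  # x, y, z per marker, per frame

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.95)) + 1
print(f"{k} components explain 95% of the variance (out of {X.shape[1]} dims)")
```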

2010

The AVLaughterCycle Database
Jérôme Urbain | Elisabetta Bevacqua | Thierry Dutoit | Alexis Moinet | Radoslaw Niewiadomski | Catherine Pelachaud | Benjamin Picart | Joëlle Tilmanne | Johannes Wagner
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents the large audiovisual laughter database recorded as part of the AVLaughterCycle project held during the eNTERFACE’09 Workshop in Genova, in which 24 subjects participated. The freely available database includes the audio signal and video recordings as well as facial motion tracking, thanks to markers placed on the subjects’ faces. Annotations of the recordings, focusing on laughter description, are also provided and presented in this paper. In total, the corpus contains more than 1000 spontaneous laughs and 27 acted laughs. The laughter utterances are highly variable: durations range from 250 ms to 82 s, and the sounds cover voiced vowels, breath-like expirations, and hum-, hiccup-, or grunt-like sounds. However, as the subjects had no one to interact with, the database contains very few speech-laughs. Acted laughs tend to be longer than spontaneous ones and are more often composed of voiced vowels. The database can be useful for automatic laughter processing or cognitive science research. Within the AVLaughterCycle project, it served to animate a laughing virtual agent whose output laugh is linked to the conversational partner’s input laugh.

2000

EULER: an Open, Generic, Multilingual and Multi-platform Text-to-Speech System
Thierry Dutoit | Michel Bagein | Fabrice Malfrère | Vincent Pagel | Alain Ruelle | Nawfal Tounsi | Dominique Wynsberghe
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00)