Jérôme Urbain
2014
The AV-LASYN Database: A synchronous corpus of audio and 3D facial marker data for audio-visual laughter synthesis
Hüseyin Çakmak | Jérôme Urbain | Thierry Dutoit | Joëlle Tilmanne
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
A synchronous database of acoustic and 3D facial marker data was built for audio-visual laughter synthesis. Since the aim is to use this database for HMM-based modeling and synthesis, the amount of data collected from a single subject had to be maximized. The corpus contains 251 utterances of laughter from one male participant. Laughter was elicited with the help of humorous videos. The resulting database is synchronous between modalities (audio and 3D facial motion capture data). Visual 3D data is available in common formats such as BVH and C3D, with head motion and facial deformation available independently. The data is segmented and the audio has been annotated. Phonetic transcriptions are available in an HTK-compatible format. Principal component analysis has been conducted on the visual data and has shown that dimensionality reduction might be relevant. The corpus may be obtained under a research license upon request to the authors.
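The abstract notes that principal component analysis on the visual data suggests dimensionality reduction might be relevant. The sketch below shows one way such an analysis could be reproduced; it assumes the markers have already been exported from C3D/BVH into a NumPy array, and the file name and array layout are hypothetical, not part of the released corpus.

```python
# Minimal sketch: PCA on flattened 3D facial marker trajectories to gauge
# how many components capture most of the facial deformation variance.
# Assumes a hypothetical export of shape (n_frames, n_markers, 3).
import numpy as np
from sklearn.decomposition import PCA

markers = np.load("laughter_markers.npy")    # hypothetical export file
frames = markers.reshape(len(markers), -1)   # flatten to (n_frames, 3 * n_markers)
frames = frames - frames.mean(axis=0)        # center each coordinate

pca = PCA()
pca.fit(frames)
cumvar = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumvar, 0.95)) + 1
print(f"{n_components} components explain 95% of the variance")
```

If the cumulative explained variance saturates after a few components, the flattened marker space can be modeled in a much lower-dimensional subspace, which is the kind of reduction the abstract hints at.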
2010
The AVLaughterCycle Database
Jérôme Urbain | Elisabetta Bevacqua | Thierry Dutoit | Alexis Moinet | Radoslaw Niewiadomski | Catherine Pelachaud | Benjamin Picart | Joëlle Tilmanne
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
This paper presents a large audiovisual laughter database recorded as part of the AVLaughterCycle project held during the eNTERFACE09 Workshop in Genova. Twenty-four subjects participated. The freely available database includes the audio signal and video recordings as well as facial motion tracking, thanks to markers placed on the subjects' faces. Annotations of the recordings, focusing on laughter description, are also provided and presented in this paper. In total, the corpus contains more than 1000 spontaneous laughs and 27 acted laughs. The laughter utterances are highly variable: laughter durations range from 250 ms to 82 s, and the sounds cover voiced vowels, breath-like expirations, hum-, hiccup- or grunt-like sounds, etc. However, as the subjects had no one to interact with, the database contains very few speech-laughs. Acted laughs tend to be longer than spontaneous ones and are more often composed of voiced vowels. The database can be useful for automatic laughter processing or cognitive science research. For the AVLaughterCycle project, it has served to animate a laughing virtual agent whose output laugh is linked to the conversational partner's input laugh.
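As a hedged illustration of how the reported duration statistics (250 ms to 82 s) could be recomputed from segment annotations, the sketch below assumes a simple CSV layout with start, end, and label columns; this layout is hypothetical and is not the database's actual annotation format.

```python
# Minimal sketch: summarising laughter episode durations from a segment
# annotation file. The CSV columns (start, end in seconds, label) are an
# assumption for illustration only.
import csv

durations = []
with open("laughter_segments.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: start, end, label
        if row["label"] == "laughter":
            durations.append(float(row["end"]) - float(row["start"]))

if durations:
    print(f"{len(durations)} laughs, "
          f"min {min(durations):.2f} s, max {max(durations):.2f} s, "
          f"mean {sum(durations)/len(durations):.2f} s")
```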