The AV-LASYN Database : A synchronous corpus of audio and 3D facial marker data for audio-visual laughter synthesis

Hüseyin Çakmak, Jérôme Urbain, Thierry Dutoit, Joëlle Tilmanne


Abstract
A synchronous database of acoustic and 3D facial marker data was built for audio-visual laughter synthesis. Since the database is intended for HMM-based modeling and synthesis, the amount of data collected from a single subject had to be maximized. The corpus contains 251 utterances of laughter from one male participant. Laughter was elicited with the help of humorous videos. The resulting database is synchronous between modalities (audio and 3D facial motion capture data). Visual 3D data is provided in common formats such as BVH and C3D, with head motion and facial deformation available separately. The data is segmented and the audio has been annotated. Phonetic transcriptions are available in HTK-compatible format. Principal component analysis of the visual data has shown that dimensionality reduction might be relevant. The corpus may be obtained under a research license upon request to the authors.
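The dimensionality-reduction point above can be illustrated with a small sketch: principal component analysis applied to per-frame marker coordinates, keeping only the components needed to explain most of the variance. This is a minimal, hypothetical example assuming the trajectories are loaded into a NumPy array of shape (frames, 3 × markers); the function name, variance threshold, and stand-in data are illustrative assumptions, not the authors' processing pipeline or part of the released corpus.

```python
# Minimal sketch of PCA-based dimensionality reduction on facial marker data,
# assuming trajectories are available as a (num_frames, 3 * num_markers) array.
# Names and thresholds below are hypothetical, not taken from the paper.
import numpy as np

def pca_reduce(markers, variance_to_keep=0.95):
    """Project marker trajectories onto the principal components that
    explain the requested fraction of the total variance."""
    # Center the frames around the mean facial configuration.
    mean_shape = markers.mean(axis=0)
    centered = markers - mean_shape

    # Principal components via SVD of the centered data matrix.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    explained = (singular_values ** 2) / np.sum(singular_values ** 2)

    # Keep the smallest number of components reaching the variance target.
    n_components = int(np.searchsorted(np.cumsum(explained), variance_to_keep)) + 1
    reduced = centered @ components[:n_components].T
    return reduced, components[:n_components], mean_shape

# Example usage with random stand-in data (30 markers -> 90 coordinates, 1000 frames).
frames = np.random.randn(1000, 90)
reduced, basis, mean_shape = pca_reduce(frames, variance_to_keep=0.95)
print(reduced.shape)  # (1000, k), with k well below 90 when markers are correlated
```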
Anthology ID:
L14-1179
Volume:
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Month:
May
Year:
2014
Address:
Reykjavik, Iceland
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
3398–3403
URL:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/163_Paper.pdf
Cite (ACL):
Hüseyin Çakmak, Jérôme Urbain, Thierry Dutoit, and Joëlle Tilmanne. 2014. The AV-LASYN Database : A synchronous corpus of audio and 3D facial marker data for audio-visual laughter synthesis. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3398–3403, Reykjavik, Iceland. European Language Resources Association (ELRA).
Cite (Informal):
The AV-LASYN Database : A synchronous corpus of audio and 3D facial marker data for audio-visual laughter synthesis (Çakmak et al., LREC 2014)
PDF:
http://www.lrec-conf.org/proceedings/lrec2014/pdf/163_Paper.pdf