Abstract
We have created a synchronous corpus of acoustic and 3D facial marker data from multiple speakers for adaptive audio-visual text-to-speech synthesis. The corpus contains data from one female and two male speakers and comprises 223 Austrian German sentences per speaker. In this paper, we first describe the recording process, which used professional audio equipment and a marker-based 3D facial motion capture system for the audio-visual recordings. We then turn to post-processing, which incorporates forced alignment, principal component analysis (PCA) on the visual data, and some manual checking and corrections. Finally, we describe the resulting corpus, which will be released under a research license at the end of our project. We show that the standard PCA-based feature extraction approach also works on a multi-speaker database in the adaptation scenario, where no data from the target speaker is available in the PCA step.

- Anthology ID:
- L12-1136
- Volume:
- Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
- Month:
- May
- Year:
- 2012
- Address:
- Istanbul, Turkey
- Venue:
- LREC
- Publisher:
- European Language Resources Association (ELRA)
- Pages:
- 3313–3316
- URL:
- http://www.lrec-conf.org/proceedings/lrec2012/pdf/302_Paper.pdf
- Cite (ACL):
- Dietmar Schabus, Michael Pucher, and Gregor Hofer. 2012. Building a synchronous corpus of acoustic and 3D facial marker data for adaptive audio-visual speech synthesis. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3313–3316, Istanbul, Turkey. European Language Resources Association (ELRA).
- Cite (Informal):
- Building a synchronous corpus of acoustic and 3D facial marker data for adaptive audio-visual speech synthesis (Schabus et al., LREC 2012)
- PDF:
- http://www.lrec-conf.org/proceedings/lrec2012/pdf/302_Paper.pdf
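
The PCA-based visual feature extraction mentioned in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration, not the authors' actual pipeline: the marker count, frame counts, component count, and the use of scikit-learn's `PCA` are all assumptions. It fits a PCA basis on pooled source-speaker marker data only and then projects frames from an unseen target speaker onto that basis, mirroring the adaptation scenario in which the target speaker contributes no data to the PCA step.

```python
# Minimal sketch (hypothetical shapes and names): PCA-based visual
# feature extraction for 3D facial marker data. Each frame holds the
# 3D coordinates of K markers, flattened to a 3K-dimensional vector.
import numpy as np
from sklearn.decomposition import PCA

K = 40                      # number of facial markers (assumed)
rng = np.random.default_rng(0)

# Stand-in data: frames x (3*K) marker-coordinate matrices for the
# speakers used to build the PCA basis, and for an unseen target speaker.
source_frames = rng.normal(size=(5000, 3 * K))   # pooled source speakers
target_frames = rng.normal(size=(800, 3 * K))    # target speaker (adaptation)

# Fit PCA on the source speakers only -- the target speaker contributes
# no data to this step, as in the adaptation scenario the paper describes.
pca = PCA(n_components=20)
pca.fit(source_frames)

# Project both source and target frames onto the shared basis to obtain
# low-dimensional visual feature vectors.
source_feats = pca.transform(source_frames)
target_feats = pca.transform(target_frames)

print(source_feats.shape, target_feats.shape)  # (5000, 20) (800, 20)
```

In this setup the learned components act as a speaker-independent basis for facial motion; whether 20 components suffice, and how well the basis transfers to a new speaker, would depend on the actual marker layout and data, which this sketch does not reproduce.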