@inproceedings{schabus-etal-2012-building,
    title = "Building a synchronous corpus of acoustic and 3{D} facial marker data for adaptive audio-visual speech synthesis",
    author = "Schabus, Dietmar  and
      Pucher, Michael  and
      Hofer, Gregor",
    editor = "Calzolari, Nicoletta  and
      Choukri, Khalid  and
      Declerck, Thierry  and
      Do{\u{g}}an, Mehmet U{\u{g}}ur  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Moreno, Asuncion  and
      Odijk, Jan  and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
    month = may,
    year = "2012",
    address = "Istanbul, Turkey",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://preview.aclanthology.org/ingest-emnlp/L12-1136/",
    pages = "3313--3316",
    abstract = "We have created a synchronous corpus of acoustic and 3D facial marker data from multiple speakers for adaptive audio-visual text-to-speech synthesis. The corpus contains data from one female and two male speakers and amounts to 223 Austrian German sentences each. In this paper, we first describe the recording process, using professional audio equipment and a marker-based 3D facial motion capturing system for the audio-visual recordings. We then turn to post-processing, which incorporates forced alignment, principal component analysis (PCA) on the visual data, and some manual checking and corrections. Finally, we describe the resulting corpus, which will be released under a research license at the end of our project. We show that the standard PCA-based feature extraction approach also works on a multi-speaker database in the adaptation scenario, where no data from the target speaker is available in the PCA step."
}