Sylvie Gibet


2020

LSF-ANIMAL: A Motion Capture Corpus in French Sign Language Designed for the Animation of Signing Avatars
Lucie Naert | Caroline Larboulette | Sylvie Gibet
Proceedings of the 12th Language Resources and Evaluation Conference

Signing avatars allow deaf people to access information in their preferred language through an interactive visualization of the spatio-temporal content of sign language. However, avatars are often procedurally animated, resulting in robotic and unnatural movements that are rejected by the community for which they are intended. To overcome this lack of authenticity, solutions in which the avatar is animated from motion capture data are promising. Yet the initial data set drastically limits the range of signs that the avatar can produce. It is therefore worthwhile to enrich the initial corpus with new content by editing the captured motions. For this purpose, we collected the LSF-ANIMAL corpus, a French Sign Language (LSF) corpus composed of captured isolated signs and full sentences that can be used both to study LSF features and to generate new signs and utterances. This paper presents the precise definition and content of this corpus, technical considerations relative to the motion capture process (including the marker set definition), the post-processing steps required to obtain data in a standard motion format, and the annotation scheme used to label the data. The quality of the corpus with respect to intelligibility, accuracy, and realism is perceptually evaluated by 41 participants, including native LSF signers.

2018

CONDUCT: An Expressive Conducting Gesture Dataset for Sound Control
Lei Chen | Sylvie Gibet | Camille Marteau
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2014

A Database of Full Body Virtual Interactions Annotated with Expressivity Scores
Virginie Demulier | Elisabetta Bevacqua | Florian Focone | Tom Giraud | Pamela Carreno | Brice Isableu | Sylvie Gibet | Pierre De Loor | Jean-Claude Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Recent technologies enable the exploitation of full-body expressions in applications such as interactive arts, but they remain limited in terms of subtle dyadic interaction patterns. Our project targets expressive full-body interactions between a user and an autonomous virtual agent. Currently available databases contain neither full-body expressivity nor interaction patterns mediated by avatars. In this paper, we describe a protocol defined to collect a database for studying expressive full-body dyadic interactions. We detail the coding scheme used to manually annotate the collected videos. Reliability measures for global annotations of expressivity and interaction are also provided.

2013

Interactive editing of utterances in French sign language dedicated to signing avatars (Édition interactive d’énoncés en langue des signes française dédiée aux avatars signeurs) [in French]
Ludovic Hamon | Sylvie Gibet | Sabah Boustila
Proceedings of TALN 2013 (Volume 2: Short Papers)

2010

Heterogeneous Data Sources for Signed Language Analysis and Synthesis: The SignCom Project
Kyle Duarte | Sylvie Gibet
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper describes how the heterogeneous data sources captured in the SignCom project may be used for the analysis and synthesis of French Sign Language (LSF) utterances. The captured data combine video with multimodal motion capture (mocap) data, including body and hand movements as well as facial expressions. These data are pre-processed, synchronized, and enriched with text annotations of the signed language elicitation sessions. The addition of mocap data to traditional data structures gives linguists who wish to understand the various parts of signs (handshape, movement, orientation, etc.) highly precise phonetic data, as well as their interactions and relative timings. We show how the phonologies of hand configurations and articulator movements may be studied using signal processing and statistical analysis tools to highlight regularities or temporal schemata across the different modalities. Finally, the mocap data allow us to replay signs using a computer animation engine, specifically by editing and rearranging movements and configurations to create novel utterances.