Coralie Vincent



Multidimensional Coding of Multimodal Languaging in Multi-Party Settings
Christophe Parisse | Marion Blondel | Stéphanie Caët | Claire Danet | Coralie Vincent | Aliyah Morgenstern
Proceedings of the Thirteenth Language Resources and Evaluation Conference

In natural language settings, many interactions include more than two speakers, and real-life interpretation draws on all the information available in every modality. This is a challenge for corpus-based analyses because the information in both the audio and visual channels must be included in the coding. The goal of the DINLANG project is to tackle that challenge and analyze spontaneous interactions in family dinner settings (two adults and two to three children). The families use either French or LSF (French Sign Language). Our aim is to compare how participants share language across the range of modalities found in vocal and visual languaging, in coordination with dining. In order to pinpoint similarities and differences, we had to find a common coding tool for all situations (which vary from one family to another) and modalities. Our coding procedure relies on the ELAN software. We created a template organized around participants, situations, and modalities, rather than around language forms. Spoken language transcription can be integrated when it exists, but it is not mandatory. Data created with other software can be imported into ELAN files if it is linked using time stamps. Analyses of the coded files rely on ELAN’s structured search functionalities, which make it possible to carry out fine-grained temporal analyses and which can be complemented with spreadsheets or the R language.
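ELAN stores its coding in XML files (.eaf), so tier-based temporal analyses can also be scripted outside the program. The sketch below is a minimal, hypothetical illustration (tier names, time values, and annotation labels are invented, not taken from the DINLANG corpus): it parses an EAF-style fragment with Python's standard library and measures how long annotations from two participants overlap in time.

```python
import xml.etree.ElementTree as ET

# Minimal EAF-like document (ELAN's XML format). Tier names and
# annotation values are invented for illustration only.
EAF = """<ANNOTATION_DOCUMENT>
  <TIME_ORDER>
    <TIME_SLOT TIME_SLOT_ID="ts1" TIME_VALUE="0"/>
    <TIME_SLOT TIME_SLOT_ID="ts2" TIME_VALUE="1200"/>
    <TIME_SLOT TIME_SLOT_ID="ts3" TIME_VALUE="800"/>
    <TIME_SLOT TIME_SLOT_ID="ts4" TIME_VALUE="2000"/>
  </TIME_ORDER>
  <TIER TIER_ID="Mother-gesture">
    <ANNOTATION><ALIGNABLE_ANNOTATION TIME_SLOT_REF1="ts1" TIME_SLOT_REF2="ts2">
      <ANNOTATION_VALUE>points at plate</ANNOTATION_VALUE>
    </ALIGNABLE_ANNOTATION></ANNOTATION>
  </TIER>
  <TIER TIER_ID="Child-gesture">
    <ANNOTATION><ALIGNABLE_ANNOTATION TIME_SLOT_REF1="ts3" TIME_SLOT_REF2="ts4">
      <ANNOTATION_VALUE>reaches</ANNOTATION_VALUE>
    </ALIGNABLE_ANNOTATION></ANNOTATION>
  </TIER>
</ANNOTATION_DOCUMENT>"""

def tier_intervals(root, tier_id):
    """Return (start_ms, end_ms) pairs for one tier's aligned annotations."""
    slots = {s.get("TIME_SLOT_ID"): int(s.get("TIME_VALUE"))
             for s in root.iter("TIME_SLOT")}
    out = []
    for tier in root.iter("TIER"):
        if tier.get("TIER_ID") != tier_id:
            continue
        for ann in tier.iter("ALIGNABLE_ANNOTATION"):
            out.append((slots[ann.get("TIME_SLOT_REF1")],
                        slots[ann.get("TIME_SLOT_REF2")]))
    return out

def overlap_ms(a, b):
    """Total temporal overlap between two interval lists, in milliseconds."""
    return sum(max(0, min(e1, e2) - max(s1, s2))
               for s1, e1 in a for s2, e2 in b)

root = ET.fromstring(EAF)
mother = tier_intervals(root, "Mother-gesture")
child = tier_intervals(root, "Child-gesture")
print(overlap_ms(mother, child))  # 400 ms of co-occurring annotation
```

ELAN's own structured search offers this kind of query interactively; a script along these lines only becomes useful once results are exported for further statistics in spreadsheets or R.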


Variation prosodique et traduction poétique (LSF/français) : Que devient la prosodie lorsqu’elle change de canal ? (Prosodic variation and poetic translation (LSF/French): What happens to prosody with a channel change?)
Fanny Catteau | Marion Blondel | Coralie Vincent | Patrice Guyot | Dominique Boutet
Actes de la conférence conjointe JEP-TALN-RECITAL 2016. volume 1 : JEP

The study of the prosody of vocal languages relies in part on measuring parameters of sound duration, intensity, and frequency. Sign languages, for their part, use the visual-gestural channel and mobilize manual and non-manual articulators (torso, head, elements of the face). Our study aims to establish tools for comparing, at the prosodic level, the French translation of poetic sequences with the original version in French Sign Language (LSF). We collected video data augmented with motion capture – which offers several avenues for exploring prosodic parameters in LSF – as well as audio data of the French translations – which reveal the strategies interpreters use to render prosodic variation.
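The three acoustic parameters named for vocal prosody (duration, intensity, frequency) can be illustrated on a toy signal. The sketch below is not the study's analysis pipeline; it simply computes duration, RMS intensity in dB, and a crude zero-crossing F0 estimate on a synthetic 220 Hz tone, using only the Python standard library.

```python
import math

SR = 16000  # sample rate (Hz)

# A 0.5 s, 220 Hz tone as a toy stand-in for an interpreter's audio track;
# real prosodic analysis would of course use recorded speech.
signal = [0.3 * math.sin(2 * math.pi * 220 * n / SR) for n in range(SR // 2)]

# Duration: number of samples over the sample rate.
duration_s = len(signal) / SR

# Intensity: root-mean-square amplitude, reported here in dB.
rms = math.sqrt(sum(x * x for x in signal) / len(signal))
intensity_db = 20 * math.log10(rms)

# Frequency: crude F0 estimate from positive-going zero crossings
# (one per period for a clean periodic signal).
crossings = sum(1 for a, b in zip(signal, signal[1:]) if a < 0 <= b)
f0_hz = crossings / duration_s

# duration ~ 0.5 s, intensity ~ -13.5 dB, F0 ~ 220 Hz
print(duration_s, round(intensity_db, 1), round(f0_hz, 1))
```

A zero-crossing count only works on clean periodic signals; production tools such as Praat estimate F0 with autocorrelation-based methods that are robust to real speech.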


The DesPho-APaDy Project: Developing an Acoustic-phonetic Characterization of Dysarthric Speech in French
Cécile Fougeron | Lise Crevier-Buchman | Corinne Fredouille | Alain Ghio | Christine Meunier | Claude Chevrie-Muller | Jean-Francois Bonastre | Antonia Colazo Simon | Céline Delooze | Danielle Duez | Cédric Gendrot | Thierry Legou | Nathalie Levèque | Claire Pillot-Loiseau | Serge Pinto | Gilles Pouchoulin | Danièle Robert | Jacqueline Vaissiere | François Viallet | Coralie Vincent
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper presents the rationale, objectives and advances of an ongoing project (the DesPho-APaDy project, funded by the French National Agency of Research) which aims to provide a systematic and quantified description of French dysarthric speech over a large population of patients and three dysarthria types (related to Parkinson’s disease, amyotrophic lateral sclerosis, and a pure cerebellar alteration). The two French corpora of dysarthric patients from which the speech data have been selected for analysis are described first. Secondly, the paper discusses and motivates the need for a structured, organized computerized platform to store, organize and make accessible (for selected and protected usage) dysarthric speech corpora and the associated patients’ clinical information (mostly disseminated across different locations: labs, hospitals, …). The design of both a computer database and a multi-field query interface is proposed for the clinical context. Finally, the paper presents the project’s advances regarding the selection of the population used for the dysarthria analysis, the preprocessing of the speech files, their orthographic transcription and their automatic alignment.
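To make the idea of a multi-field query over linked speech and clinical data concrete, here is a hedged sketch using SQLite. The schema, field names, and values are invented for illustration and do not reflect the project's actual database design.

```python
import sqlite3

# Hypothetical schema sketching the kind of multi-field query interface the
# project describes: recordings linked to patients' clinical information.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (
    id INTEGER PRIMARY KEY,
    dysarthria_type TEXT,       -- e.g. 'parkinsonian', 'ALS', 'cerebellar'
    severity INTEGER            -- illustrative clinician rating, 0-4
);
CREATE TABLE recording (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(id),
    task TEXT,                  -- e.g. 'reading', 'sustained vowel'
    duration_s REAL
);
""")
con.executemany("INSERT INTO patient VALUES (?, ?, ?)",
                [(1, "parkinsonian", 2), (2, "ALS", 3), (3, "cerebellar", 1)])
con.executemany("INSERT INTO recording VALUES (?, ?, ?, ?)",
                [(1, 1, "reading", 62.0), (2, 2, "reading", 55.5),
                 (3, 2, "sustained vowel", 4.2)])

# Multi-field query: reading-task recordings from moderately or severely
# affected patients, joined with their clinical information.
rows = con.execute("""
    SELECT p.dysarthria_type, r.duration_s
    FROM recording r JOIN patient p ON r.patient_id = p.id
    WHERE r.task = 'reading' AND p.severity >= 2
    ORDER BY p.id
""").fetchall()
print(rows)  # [('parkinsonian', 62.0), ('ALS', 55.5)]
```

A relational join like this is what lets one selection criterion span both the speech material (task, duration) and the clinical metadata (dysarthria type, severity) in a single query.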