Thierry Legou
Fixing paper assignments
The growing quantity of multimodal corpora being collected enables the development of new methods for analyzing conversation. In the vast majority of cases, however, these corpora include only audio and video recordings, leaving aside other modalities that are harder to collect but offer a complementary perspective on the conversation, such as the speakers’ brain activity. We therefore present BrainKT, a corpus of natural conversation in French combining audio, video and neuro-physiological signals, collected with the aim of studying information transmission and common-ground instantiation in depth. In each of the conversations of the 28 dyads (56 participants), the speakers first collaborated on a conversational game (15 min) and were then free to discuss a topic of their choice (15 min). For every discussion, audio, video, brain activity (EEG with a Biosemi 64) and physiological activity (Empatica-E4 wristband) were recorded. This paper situates the corpus within the literature, presents the experimental setup and the difficulties encountered, and describes the different levels of annotation proposed for the corpus.
An increasing amount of multimodal recordings has been paving the way for more automatic ways to study language and conversational interaction. However, these data largely consist of audio and video recordings, leaving aside other modalities that might complement this external view of the conversation but are more difficult to collect in naturalistic setups, such as the participants’ brain activity. In this context, we present BrainKT, a natural conversational corpus with audio, video and neuro-physiological signals, collected with the aim of studying information exchanges and common-ground instantiation in conversation in a new, more in-depth way. We recorded conversations from 28 dyads (56 participants) in 30-minute sessions in which subjects were first tasked to collaborate on a joint information game and then drifted freely to a topic of their choice. During each session, audio and video were captured, along with the participants’ neural signal (EEG with Biosemi 64) and their electro-physiological activity (with Empatica-E4). The paper situates this new type of resource in the literature, presents the experimental setup and describes the different kinds of annotations considered for the corpus.
Conversations (normal speech) and professional interactions (e.g., projected speech in the classroom) have been identified as situations with an increased risk of exposure to SARS-CoV-2, due to the high production of droplets in the exhaled air. However, it is still unclear to what extent speech properties influence droplet emission during everyday conversations. Here, we report the experimental protocol of three experiments aiming at measuring the velocity and direction of the airflow, and the number and size of droplets spread, during speech interactions in French. We consider different phonetic conditions that potentially modulate speech droplet production, such as voice intensity (normal vs. loud voice), articulation manner of phonemes (type of consonants and vowels) and prosody (i.e., the melody of the speech). Findings from these experiments will allow future simulation studies to predict the transport, dispersion and evaporation of droplets emitted under different speech conditions.
We present in this paper the first natural-conversation corpus recorded with all modalities plus neuro-physiological signals. Five dyads (10 participants) were each recorded three times, in three sessions (30 min each) at four-day intervals. During each session, audio and video were captured, as well as the neural signal (EEG with Emotiv-EPOC) and the electro-physiological one (with Empatica-E4). This resource is original in several respects. Technically, it is the first to gather all these types of data in a natural conversation situation. Moreover, recording the same dyads at different periods opens the door to new longitudinal investigations, such as the evolution of interlocutors’ alignment over time. The paper situates this new type of resource within the literature, presents the experimental setup and describes the different annotations enriching the corpus.
The laryngeal and articulatory muscles are involved in realizing the features that distinguish phonemes. This study examines speakers’ self-perception and the distribution of vocal and articulatory effort as a function of the voicing feature in modal versus whispered speech in French. For the 12 French obstruents, effort is perceived as greater for voiced consonants than for their voiceless counterparts, except in the case of the labiodental fricatives. Analyses of the production of bilabial stops show that laryngeal effort is greater for voiced consonants and articulatory effort greater for voiceless ones, but the reverse holds for fricatives. These results indicate that the effort perceived during one’s own production rests on a perception of laryngeal effort that predominates over articulatory effort, in modal as well as whispered voice, but that it is nevertheless modulated by the place and manner of articulation of the consonants.
This paper presents the TYPALOC corpus of French dysarthric and healthy speech and the rationale underlying its constitution. The objective is to compare phonetic variation in the speech of dysarthric vs. healthy speakers in different speech conditions (read and unprepared speech). More precisely, we aim to compare the extent, types and location of phonetic variation across these populations and speech conditions. The TYPALOC corpus comprises a selection of 28 dysarthric patients (three different pathologies) and 12 healthy control speakers, recorded while reading the same text and in a more natural continuous speech condition. Each audio signal was segmented into Inter-Pausal Units. The corpus was then manually transcribed and automatically aligned, and the alignment was corrected by an expert phonetician. Moreover, the corpus benefits from an automatic syllabification and an automatic detection of acoustic phone-based anomalies. Finally, in order to interpret phonetic variations due to the pathologies, a perceptual evaluation of each patient was conducted. Quantitative data are provided at the end of the paper.
This paper presents the rationale, objectives and advances of an ongoing project (the DesPho-APaDy project, funded by the French National Agency of Research) which aims to provide a systematic and quantified description of French dysarthric speech over a large population of patients and three dysarthria types (related to Parkinson's disease, Amyotrophic Lateral Sclerosis, and a pure cerebellar alteration). The two French corpora of dysarthric patients from which the speech data have been selected for analysis purposes are first described. Secondly, this paper discusses and outlines the need for a structured and organized computerized platform to store, organize and make accessible (for selected and protected usage) dysarthric speech corpora and the associated patients' clinical information (mostly disseminated across different locations: labs, hospitals, …). The design of both a computer database and a multi-field query interface is proposed for the clinical context. Finally, advances of the project related to the selection of the population used for the dysarthria analysis, the preprocessing of the speech files, their orthographic transcription and their automatic alignment are also presented.