Albert Rilliard
This study presents a perceptual test evaluating the cues that allow planning of turn-taking in spontaneous spoken interactions. Inter-Pausal Units (IPUs) were extracted from dialogues in the REPERE corpus and annotated for terminality. To determine which parameters affect judgements of the possibility of taking the turn, the stimuli were presented in audio or textual form. Participants had to indicate whether the turn could be taken "Now", "Soon", or "Not yet" at the end of IPUs truncated by 0 to 3 prosodic words. Participants are less likely to take the turn at non-terminal boundaries in the audio modality than in the textual one. The audio modality also allows a turn end to be anticipated at least three words before it occurs, whereas the textual modality allows less anticipation. These results support the importance of the cues carried by speech for the planning of dialogic interactions.
Few speech resources describe interruption phenomena, especially for TV and media content. Descriptions of these phenomena vary across authors, leaving room for improved annotation protocols. We present an annotation of Transition-Relevance Places (TRP) and floor-taking event types on an existing French TV and radio broadcast corpus to facilitate studies of interruptions and turn-taking. Each speaker change is annotated with the presence or absence of a TRP, and with a classification of the next speaker's floor-taking as Smooth, Backchannel, or one of several types of turn violation (cooperative or competitive, successful or attempted interruption). An inter-rater agreement analysis shows these annotations have moderate to substantial reliability: κ=0.75 for TRP annotation, κ=0.56 for Backchannel, and κ=0.5 for the Interruption/non-interruption distinction. Finer distinctions linked to cooperative or competitive behaviors yield lower agreement. These results underline the importance of low-level features like TRP for deriving a classification of turn changes that is less subject to interpretation. Analysis of overlapping speech highlights the existence of interruptions without overlaps and of smooth transitions with overlaps. These annotations are available at https://lium.univ-lemans.fr/corpus-allies/.
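The κ values above are Cohen's kappa scores. As a minimal sketch (not the paper's code), the statistic κ = (p_o − p_e) / (1 − p_e) can be computed for two raters' hypothetical TRP presence/absence labels as follows; the label sequences are invented for illustration.

```python
# Illustrative sketch: Cohen's kappa for two raters' hypothetical
# TRP presence (1) / absence (0) labels over 12 speaker changes.
def cohen_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    labels = set(a) | set(b)
    p_o = sum(x == y for x, y in zip(a, b)) / n                      # observed agreement
    p_e = sum((a.count(c) / n) * (b.count(c) / n) for c in labels)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
rater_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0]
kappa = cohen_kappa(rater_a, rater_b)  # raw agreement is 10/12, but kappa corrects for chance
```

Note that the raters agree on 10 of 12 items, yet κ is noticeably lower than 10/12 because kappa discounts the agreement expected by chance.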
The ANR Gender Equality Monitor (GEM) project is coordinated by the Institut National de l'Audiovisuel (INA) and aims to study the place of women in the media (radio and television). In this submission, we present the work carried out at LISN: (i) a diachronic study of the acoustic characteristics of the voice as a function of gender and age, (ii) an acoustic comparison of the voices of female and male politicians showing an inconsistency between vocal performance and comments about the voice, (iii) an automatic system estimating perceived femininity from vocal characteristics, (iv) a comparison of topic segmentation systems for automatic transcriptions of audiovisual data, (v) a measurement of societal biases in language models in a multilingual and multicultural context, and (vi) first attempts at identifying advertising as a function of speaker gender.
This paper presents a semi-automatic approach to creating a diachronic corpus of voices balanced for speaker age, gender, and recording period, across 32 categories (2 genders, 4 age ranges, and 4 recording periods). Corpora were selected at the French National Institute of Audiovisual (INA) to obtain at least 30 speakers per category (a total of 960 speakers; only 874 have been found so far). For each speaker, speech excerpts were extracted from audiovisual documents using an automatic pipeline consisting of speech detection, background-music and overlapped-speech removal, and speaker diarization, used to present clean speaker segments to human annotators who identify the target speakers. This pipeline proved highly effective, cutting manual processing down by a factor of ten. An evaluation of the quality of the automatic processing and of the final output is provided: it shows that the automatic processing is comparable to state-of-the-art systems, and that the output provides high-quality speech for most of the selected excerpts. This method is thus recommended for creating large corpora of known target speakers.
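The filtering stage of such a pipeline can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the segment representation and the tag names (`speech`, `music`, `overlap`) are assumptions.

```python
# Hypothetical sketch of the segment-filtering step: given automatically
# tagged segments, keep only clean single-speaker speech (drop segments
# containing background music or overlapped speech).
def keep_clean_speech(segments):
    """Return only segments tagged as speech with no music/overlap tags."""
    return [s for s in segments
            if "speech" in s["tags"] and not {"music", "overlap"} & s["tags"]]

segments = [
    {"start": 0.0, "end": 2.1, "tags": {"speech"}},            # kept
    {"start": 2.1, "end": 3.0, "tags": {"speech", "music"}},   # dropped
    {"start": 3.0, "end": 5.4, "tags": {"speech", "overlap"}}, # dropped
]
clean = keep_clean_speech(segments)
```

Only the clean segments would then be passed to speaker diarization and shown to annotators.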
We present a French corpus of political interviews labeled at the utterance level along expressive dimensions such as Arousal. The corpus consists of 7.5 hours of high-quality audio-visual recordings with transcriptions. At the time of publication, 1 hour of speech had been segmented into short utterances, each manually annotated for Arousal. Our segmentation approach differs from that of similar corpora and allows us to establish an automatic Arousal prediction baseline by building a speech-based classification model. Although this paper focuses on the acoustic expression of Arousal, it paves the way for future work on conflictual and hostile expression recognition as well as on multimodal architectures.
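A speech-based classification baseline of this kind can be sketched minimally. The sketch below is an assumption for illustration, not the paper's model: it uses a nearest-centroid classifier over invented per-utterance acoustic features (normalized mean pitch and energy) to predict a binary Arousal label.

```python
# Illustrative nearest-centroid baseline (assumed, not the paper's model):
# classify an utterance's Arousal from hypothetical acoustic features.
import math

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def predict(x, centroids):
    """Return the label of the closest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Hypothetical training features: [mean_f0_norm, energy_norm]
train = {"high": [[0.8, 0.9], [0.7, 0.8]],
         "low":  [[0.2, 0.1], [0.3, 0.2]]}
centroids = {label: centroid(rows) for label, rows in train.items()}
label = predict([0.75, 0.85], centroids)  # closest to the "high" centroid
```

A real baseline would of course extract features from audio (e.g. with an acoustic toolkit) and use a stronger classifier; the point here is only the shape of the task.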
This work proposes to renew traditional dialectological atlases by mapping pronunciation variants in French through a website. The web is used not only to collect data but also to disseminate the results to researchers and the general public. The crowdsourcing-based methodology allowed us to collect information from 2,500 French speakers in Europe (France, Belgium, Switzerland). A dynamic platform with a user-friendly interface was then developed to map the pronunciation of 70 words in the different regions of the countries concerned (in particular, words with mid vowels or whose final consonant may or may not be pronounced). The visualization options, by department/canton/province or by region, combining several pronunciation features and word sets in the form of colored dots, hatching, etc., are presented in this article. One can thus immediately observe a more closed /E/ (as well as a more open /O/) in Nord-Pas-de-Calais and the south of France for words such as parfait or rose, and a more closed /Œ/ in Switzerland for a word such as gueule, for example.
Text and speech corpora for training a tale-telling robot have been designed, recorded, and annotated. The aim of these corpora is to study expressive storytelling behaviour and to help design expressive prosodic and co-verbal variations for the artificial storyteller. A set of 89 children's tales in French serves as the basis for this work. The tale annotation principles and scheme are described, together with a description of the corpus in terms of coverage and inter-annotator agreement. Automatic analysis of a new tale using this corpus and machine learning is discussed, as are metrics for evaluating automatic annotation methods. A speech corpus of about 1 hour, comprising 12 tales, has been recorded, aligned, and annotated. This corpus is used for predicting expressive prosody in children's tales above the sentence level.
A Hungarian multimodal spontaneous expressive speech corpus was recorded following the methodology of a similar French corpus. The method relied on a Wizard-of-Oz, scenario-based induction of varying affective states. The subjects interacted with a supposedly voice-recognition-driven computer application using simple command words. Audio and video signals were captured for the 7 recorded subjects. After the experiment, the subjects watched the video recording of their session and labelled the recorded corpus themselves, freely describing the evolution of their affective states. The obtained labels were later classified into one of the following broad emotional categories: satisfaction, dislike, stress, or other. A listening test was performed by 25 naïve listeners in order to validate the category labels originating from the self-labelling. For 52 of the 149 stimuli, listeners' judgements of the emotional content were in agreement with the labels. The result of the listening test was compared with an earlier test validating part of the French corpus. While the French test had a higher success ratio, validating the labels of 79 of the 193 tested stimuli, the stimuli validated by the two tests can form the basis of cross-linguistic comparison experiments.
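The two validation rates compared above work out as follows (a simple arithmetic check of the figures given in the abstract):

```python
# Validation rates implied by the figures above:
# Hungarian test: 52 of 149 stimuli validated; French test: 79 of 193.
hungarian_rate = 52 / 149   # ~0.349
french_rate = 79 / 193      # ~0.409
print(round(hungarian_rate, 3), round(french_rate, 3))
```

So the French test validated roughly 41% of its stimuli against roughly 35% for the Hungarian one.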