Valentin Pelloin
2026
spINAch: A Diachronic Corpus of French Broadcast Speech Controlled for Speakers’ Age and Gender
Simon Devauchelle | David Doukhan | Rémi Uro | Lucas Ondel | Valentin Pelloin | Olympia Imbert-Brégégère | Véronique Lefort | Kévin Picard | Emeline Seignobos | Albert Rilliard
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We present spINAch, a large diachronic corpus of French speech from radio and television archives, balanced for speakers’ gender and age (20-95 years old) and spanning 60 years of broadcasts, from 1955 to 2015. The dataset includes over 320 hours of recordings from more than two thousand speakers. We describe the methodology used to build the corpus, focusing on the acoustic quality of the collected samples. The data were automatically transcribed and phonetically aligned to enable studies at the phonemic level. More than 3 million oral vowels have been analyzed to estimate their fundamental frequency and formants. The corpus, available to the community for research purposes, is valuable for describing the evolution of Parisian French thanks to its balanced representation of gender and age. The presented analyses also demonstrate that the diachronic nature of the corpus allows the observation of various phonetic phenomena, such as the evolution of voice pitch over time (which does not differ by gender in our data) and the neutralization of the /a/-/ɑ/ opposition in Parisian French during this period.
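As a hedged illustration of the vowel measurements described above, the sketch below estimates f0 and the first two formants at a vowel midpoint with Praat via the parselmouth library; the file name, interval times, and analysis settings are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: f0 and formant measurement at a vowel midpoint with parselmouth.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("broadcast_excerpt.wav")   # hypothetical audio excerpt
vowel_start, vowel_end = 1.20, 1.35                # assumed alignment times (s)
midpoint = (vowel_start + vowel_end) / 2

# Fundamental frequency from Praat's autocorrelation pitch tracker.
pitch = snd.to_pitch(time_step=0.01, pitch_floor=60.0, pitch_ceiling=500.0)
f0 = call(pitch, "Get value at time", midpoint, "Hertz", "Linear")

# Formants via Burg's method; F1/F2 carry the /a/-/ɑ/ contrast.
# maximum_formant is typically ~5000 Hz for male and ~5500 Hz for female voices.
formants = snd.to_formant_burg(maximum_formant=5500.0)
f1 = call(formants, "Get value at time", 1, midpoint, "Hertz", "Linear")
f2 = call(formants, "Get value at time", 2, midpoint, "Hertz", "Linear")

print(f"f0={f0:.1f} Hz, F1={f1:.0f} Hz, F2={f2:.0f} Hz")
```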
Pantagruel: Unified Self-Supervised Encoders for French Text and Speech
Phuong-Hang Le | Valentin Pelloin | Arnault Chatelain | Maryem Bouziane | Mohammed Ghennai | Qianwen Guan | Kirill Milintsevich | Salima Mdhaffar | Aidan Mannion | Nils Defauw | Shuyue Gu | Alexandre Daniel Audibert | Marco Dinarelli | Yannick Estève | Lorraine Goeuriot | Steffen Lalande | Nicolas Hervé | Maximin Coavoux | François Portet | Étienne Ollion | Marie Candito | Maxime Peyrard | Solange Rossato | Benjamin Lecouteux | Aurélie Nardy | Gilles Sérasset | Vincent Segonne | Solène Evain | Diandra Fabre | Didier Schwab
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We release Pantagruel models, a new family of self-supervised encoder models for French text and speech. Instead of predicting modality-tailored targets such as textual tokens or speech units, Pantagruel learns contextualized target representations in the feature space, allowing modality-specific encoders to capture linguistic and acoustic regularities more effectively. Separate models are pre-trained on large-scale French corpora, including Wikipedia, OSCAR, and CroissantLLM for text, together with MultilingualLibriSpeech, LeBenchmark, and INA-100k for speech. INA-100k is a newly introduced 100,000-hour corpus of French audio derived from the archives of the Institut National de l’Audiovisuel (INA), the national repository of French radio and television broadcasts, providing highly diverse audio data. We evaluate Pantagruel across a broad range of downstream tasks spanning both modalities, including those from standard French benchmarks such as FLUE and LeBenchmark. Across these tasks, Pantagruel models show competitive or superior performance compared to strong French baselines such as CamemBERT, FlauBERT, and LeBenchmark2.0, while maintaining a shared architecture that can seamlessly handle either speech or text inputs. These results confirm the effectiveness of feature-space self-supervised objectives for French representation learning and highlight Pantagruel as a robust foundation for multimodal speech-text understanding.
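The feature-space objective described in the abstract can be illustrated with a generic teacher-student sketch in the style of data2vec: a student encoder regresses, at masked positions, the contextualized representations produced by an exponential-moving-average (EMA) teacher. The architecture, dimensions, and decay below are illustrative assumptions, not the released Pantagruel code.

```python
# Generic sketch of feature-space self-supervised learning (data2vec-style).
import copy
import torch
import torch.nn.functional as F

student = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=4,
)
teacher = copy.deepcopy(student)          # EMA copy; no gradients flow into it
for p in teacher.parameters():
    p.requires_grad_(False)

def ssl_step(x, mask, ema_decay=0.999):
    """x: (batch, time, 256) input features; mask: (batch, time) bool, True = masked."""
    with torch.no_grad():
        # Contextualized targets from the clean input. (The real recipe
        # typically averages several top layers; one layer keeps this short.)
        targets = teacher(x)
    x_masked = x.masked_fill(mask.unsqueeze(-1), 0.0)
    preds = student(x_masked)
    loss = F.mse_loss(preds[mask], targets[mask])  # regress targets at masked steps
    loss.backward()                                 # optimizer step omitted
    # EMA update of the teacher toward the student.
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(ema_decay).add_(ps, alpha=1 - ema_decay)
    return loss
```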
Data Selection Effects on Self-Supervised Learning of Audio Representations for French Audiovisual Broadcasts
Valentin Pelloin | Lina Bekkali | Reda Dehak | David Doukhan
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Audio and speech self-supervised encoder models are now widely used for many different tasks. Many of these models are trained on clean, segmented speech content such as LibriSpeech. In this paper, we investigate how the pretraining datasets of such SSL (Self-Supervised Learning) models impact their downstream results. We build a large pretraining corpus of highly diverse TV and radio broadcast audio content, which we describe with automatic tools. We use these annotations to build smaller subsets, on which we train audio SSL models. We then evaluate the models on multiple downstream tasks such as automatic speech recognition, voice activity and music detection, and speaker recognition. The results show the potential of pretraining SSL models on diverse audio content without restricting it to speech. We also perform a membership inference attack to evaluate the encoders’ ability to memorize their training datasets, which highlights the importance of data deduplication. This unified training could bridge the speech and music machine learning communities.
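To illustrate the membership inference evaluation mentioned above, here is a minimal score-based sketch: samples that were seen in pretraining tend to receive lower losses, so a simple threshold on a per-utterance score can be measured with an AUC. The scoring function and helper names are assumptions, not the paper's exact attack.

```python
# Minimal score-based membership inference sketch.
import numpy as np
from sklearn.metrics import roc_auc_score

def pretraining_loss(model, utterance):
    """Hypothetical helper returning the model's SSL loss on one utterance."""
    raise NotImplementedError

def membership_auc(model, members, non_members):
    # Lower loss suggests the sample was seen during pretraining, so we
    # negate it to get a "membership score".
    scores = [-pretraining_loss(model, u) for u in members + non_members]
    labels = [1] * len(members) + [0] * len(non_members)
    # AUC near 0.5 means the encoder does not leak membership; values well
    # above 0.5 indicate memorization (e.g. of duplicated training audio).
    return roc_auc_score(labels, np.asarray(scores))
```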
2022
Using ASR-Generated Text for Spoken Language Modeling
Nicolas Hervé | Valentin Pelloin | Benoit Favre | Franck Dary | Antoine Laurent | Sylvain Meignier | Laurent Besacier
Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models
This paper aims at improving spoken language modeling (LM) using a very large amount of automatically transcribed speech. We leverage the INA (French National Audiovisual Institute) collection and obtain 19GB of text after applying ASR to 350,000 hours of diverse TV shows. From this, spoken language models are trained either by fine-tuning an existing LM (FlauBERT) or by training an LM from scratch. The new models (FlauBERT-Oral) will be shared with the community and are evaluated not only in terms of word prediction accuracy but also on two downstream tasks: classification of TV shows and syntactic parsing of speech. Experimental results show that FlauBERT-Oral is better than its initial FlauBERT version, demonstrating that, despite its inherently noisy nature, ASR-generated text can be useful for improving spoken language modeling.
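As a hedged illustration of the fine-tuning route described above, the sketch below continues masked-language-model training of FlauBERT on a one-transcript-per-line text file with Hugging Face transformers; the file name and hyperparameters are illustrative assumptions, not the paper's training setup.

```python
# Minimal sketch: continue FlauBERT's MLM pretraining on ASR transcripts.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = AutoModelForMaskedLM.from_pretrained("flaubert/flaubert_base_cased")

# "asr_lines.txt" is a hypothetical file with one automatic transcript per line.
dataset = load_dataset("text", data_files={"train": "asr_lines.txt"})["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="flaubert-oral-sketch",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=dataset,
    # Standard 15% dynamic masking; the collator also pads each batch.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```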
Impact Analysis of the Use of Speech and Language Models Pretrained by Self-Supervision for Spoken Language Understanding
Salima Mdhaffar | Valentin Pelloin | Antoine Caubrière | Gaëlle Laperriere | Sahar Ghannay | Bassam Jabaian | Nathalie Camelin | Yannick Estève
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Pretrained models obtained through self-supervised learning have recently been introduced for both acoustic and language modeling. Applied to spoken language understanding tasks, these models have shown great potential by improving the state-of-the-art performance on challenging benchmark datasets. In this paper, we present an error analysis of such models on the French MEDIA benchmark dataset, known as one of the most challenging benchmarks for the slot filling task among all those accessible to the research community. One year ago, the state-of-the-art system reached a Concept Error Rate (CER) of 13.6% through the use of an end-to-end neural architecture. Some months later, a cascade approach based on the sequential use of a fine-tuned wav2vec2.0 model and a fine-tuned BERT model reached a CER of 11.2%. This significant improvement raises questions about the types of errors that remain difficult to treat, but also about those that have been corrected by these models pretrained through self-supervised learning on large amounts of data. This study brings some answers, in order to better understand the limits of such models and to open new perspectives for further improving performance.
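The Concept Error Rate used above is computed like a word error rate, but over concept (slot) sequences rather than words: substitutions, deletions, and insertions from a Levenshtein alignment, divided by the number of reference concepts. A minimal self-contained sketch:

```python
# Concept Error Rate sketch: edit distance over concept label sequences.
def concept_error_rate(reference, hypothesis):
    """reference, hypothesis: lists of concept labels for one utterance."""
    n, m = len(reference), len(hypothesis)
    # dist[i][j] = edit distance between reference[:i] and hypothesis[:j]
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dist[i - 1][j - 1] + (reference[i - 1] != hypothesis[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    # CER = (substitutions + deletions + insertions) / number of reference concepts
    return dist[n][m] / max(n, 1)

# Example: one substitution over four reference concepts -> CER = 0.25
assert concept_error_rate(list("abcd"), list("abed")) == 0.25
```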
The Spoken Language Understanding MEDIA Benchmark Dataset in the Era of Deep Learning: data updates, training and evaluation tools
Gaëlle Laperrière | Valentin Pelloin | Antoine Caubrière | Salima Mdhaffar | Nathalie Camelin | Sahar Ghannay | Bassam Jabaian | Yannick Estève
Proceedings of the Thirteenth Language Resources and Evaluation Conference
With the emergence of neural end-to-end approaches for spoken language understanding (SLU), a growing number of studies on this topic have been presented over the last three years. Most of these works address the spoken language understanding domain through a simple task like speech intent detection. In this context, new benchmark datasets related to this task have also been produced and shared with the community. In this paper, we focus on the French MEDIA SLU dataset, distributed since 2005 and used as a benchmark dataset in a large number of research works. This dataset has been shown to be the most challenging one among those accessible to the research community. Distributed by ELRA, this corpus has been free for academic research since 2019. Unfortunately, the MEDIA dataset is rarely used beyond the French research community. To facilitate its use, a complete recipe, including data preparation, training, and evaluation scripts, has been built and integrated into SpeechBrain, an already popular open-source, all-in-one conversational AI toolkit based on PyTorch. This recipe is presented in this paper. In addition, based on the feedback of researchers who have worked on this dataset for several years, some corrections have been made to the initial manual annotation: the new version of the data will also be integrated into the ELRA catalogue, like the original one. Moreover, a significant amount of data collected during the construction of the MEDIA corpus in the 2000s was never used until now: we present the first results obtained on this subset (also included in the MEDIA SpeechBrain recipe), which will now be used as the MEDIA test2. Finally, we discuss evaluation issues.
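As an illustration of the kind of data preparation such a recipe performs, the sketch below extracts (concept, value) pairs from a transcript annotated with paired concept tags; the tag syntax shown is an assumed simplification for illustration, not the exact MEDIA markup shipped with the corpus.

```python
# Illustrative slot extraction from a tag-annotated transcript (assumed format).
import re

TAG = re.compile(r"<(?P<concept>[\w-]+)>\s*(?P<value>.*?)\s*</(?P=concept)>")

def extract_slots(annotated_utterance):
    """Return (concept, value) pairs from a tag-annotated transcript."""
    return [(m["concept"], m["value"]) for m in TAG.finditer(annotated_utterance)]

pairs = extract_slots(
    "je voudrais <nombre-chambre> deux </nombre-chambre> chambres "
    "a <localisation-ville> paris </localisation-ville>"
)
# -> [('nombre-chambre', 'deux'), ('localisation-ville', 'paris')]
```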
2020
Apprentissage de plongements de mots sur des corpus en langue de spécialité : une étude d’impact (Learning word embeddings on domain specific corpora: an impact study)
Valentin Pelloin | Thibault Prouteau
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 3 : Rencontre des Étudiants Chercheurs en Informatique pour le TAL
Methods for learning word embeddings now constitute the state of the art for representing vocabulary and documents as vectors in many Natural Language Processing (NLP) tasks. In this work, we consider the learning and use of word embeddings in the context of small domain-specific corpora. In particular, we want to know whether, in this setting, it is preferable to use embeddings pretrained on very large corpora such as Wikipedia, or to learn embeddings on the domain-specific corpora themselves. To answer this question, we consider two domain-specific corpora: OHSUMED, from the medical domain, and a corpus of technical documentation owned by SNCF. After introducing these corpora and assessing their specificity, we define a classification task. For this task, we feed a neural classifier with document representations based either on embeddings learned on the domain-specific corpora or on embeddings learned on Wikipedia. Our analysis shows that embeddings learned on Wikipedia provide very good results. They can be used as a reliable baseline, even though in the case of OHSUMED it is better to learn embeddings on that corpus itself. We discuss the results in light of the specificities of the two corpora, but the discussion does not clearly establish in which cases corpus-specific embeddings should be learned.
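A minimal sketch of the comparison described above: averaged word embeddings feed a classifier, with vectors either trained on the small domain corpus (here with gensim) or taken from a large pretrained model. The toy corpus and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: domain-trained word2vec vectors averaged into document representations.
import numpy as np
from gensim.models import Word2Vec

domain_corpus = [["patient", "presented", "acute", "renal", "failure"],
                 ["renal", "biopsy", "confirmed", "the", "diagnosis"]]

# Train embeddings directly on the (small) domain-specific corpus.
w2v = Word2Vec(domain_corpus, vector_size=100, window=5, min_count=1, epochs=50)

def document_vector(tokens, keyed_vectors):
    """Average the vectors of in-vocabulary tokens (zero vector if none)."""
    vecs = [keyed_vectors[t] for t in tokens if t in keyed_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(keyed_vectors.vector_size)

doc = document_vector(["acute", "renal", "failure"], w2v.wv)
# `doc` (shape (100,)) would then be the input to the neural classifier;
# swapping w2v.wv for Wikipedia-pretrained KeyedVectors gives the baseline.
```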
Co-authors
- Yannick Estève 3
- Salima Mdhaffar 3
- Nathalie Camelin 2
- Antoine Caubrière 2
- David Doukhan 2
- Sahar Ghannay 2
- Nicolas Hervé 2
- Bassam Jabaian 2
- Gaëlle Laperrière 2
- Alexandre Daniel Audibert 1
- Lina Bekkali 1
- Laurent Besacier 1
- Maryem Bouziane 1
- Marie Candito 1
- Arnault Chatelain 1
- Maximin Coavoux 1
- Franck Dary 1
- Nils Defauw 1
- Reda Dehak 1
- Simon Devauchelle 1
- Marco Dinarelli 1
- Solène Evain 1
- Diandra Fabre 1
- Benoit Favre 1
- Mohammed Ghennai 1
- Lorraine Goeuriot 1
- Shuyue Gu 1
- Qianwen Guan 1
- Olympia Imbert-Brégégère 1
- Steffen Lalande 1
- Antoine Laurent 1
- Phuong-Hang Le 1
- Benjamin Lecouteux 1
- Véronique Lefort 1
- Aidan Mannion 1
- Sylvain Meignier 1
- Kirill Milintsevich 1
- Aurélie Nardy 1
- Étienne Ollion 1
- Lucas Ondel 1
- Maxime Peyrard 1
- Kévin Picard 1
- François Portet 1
- Thibault Prouteau 1
- Albert Rilliard 1
- Solange Rossato 1
- Didier Schwab 1
- Vincent Segonne 1
- Emeline Seignobos 1
- Gilles Sérasset 1
- Rémi Uro 1