Valentin Vielzeuf
2026
Forewarned Is Forearmed: When Non-Sequential Embedding Turns into an Anomaly Detector
Elys Allesiardo | Antoine Caubrière | Valentin Vielzeuf
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper offers an in-depth analysis of non-sequential multimodal sentence-level embeddings, with a particular focus on the SONAR model. We demonstrate that certain embedding dimensions are sensitive to perturbations and can serve as indicators of decoding anomalies. By leveraging the consistency between successive encoding and decoding, we successfully build an accurate detector. Additionally, we explore modifying specific dimensions of interest to attempt to correct them. This work underscores the importance of understanding and analyzing the embeddings themselves to enhance the reliability of multimodal representations.
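The encode-decode consistency check described in the abstract can be sketched as a round-trip similarity test: encode a sentence, decode the embedding back to text, re-encode the decoded text, and flag a likely anomaly when the two embeddings disagree. The `embed` and `decode` callables below are hypothetical stand-ins for a real encoder/decoder such as SONAR's, and the threshold is illustrative.

```python
import numpy as np

def consistency_score(embed, decode, sentence, eps=1e-8):
    """Round-trip consistency: cosine similarity between the embedding of a
    sentence and the embedding of its decoded reconstruction.
    `embed` maps text -> vector; `decode` maps vector -> text (hypothetical)."""
    z1 = np.asarray(embed(sentence), dtype=float)
    text = decode(z1)                      # embedding -> reconstructed text
    z2 = np.asarray(embed(text), dtype=float)  # re-encode the reconstruction
    return float(np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2) + eps))

def is_anomalous(embed, decode, sentence, threshold=0.9):
    """A low round-trip similarity suggests a decoding anomaly."""
    return consistency_score(embed, decode, sentence) < threshold
```

In practice the threshold would be calibrated on held-out data rather than fixed a priori.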
SENS-ASR: Semantic Embedding Injection in Neural-transducer for Streaming Automatic Speech Recognition
Youness Dkhissi | Valentin Vielzeuf | Elys Allesiardo | Anthony Larcher
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Many Automatic Speech Recognition (ASR) applications require streaming processing of the audio data. In streaming mode, ASR systems must start transcribing the input stream before it is complete, i.e., they have to process a stream of inputs with limited (or no) future context. Compared to offline mode, this reduction of future context degrades the performance of Streaming-ASR systems, especially under low-latency constraints. In this work, we present SENS-ASR, an approach to enhance the transcription quality of Streaming-ASR by reinforcing the acoustic information with semantic information. This semantic information is extracted from the available past frame-embeddings by a context module. This module is trained using knowledge distillation from a sentence embedding Language Model fine-tuned on the training dataset transcriptions. Experiments on standard datasets show that SENS-ASR significantly improves the Word Error Rate in small-chunk streaming scenarios.
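The distillation objective described above can be illustrated minimally: a context module summarizes past frame-embeddings, and its output is pulled toward a frozen sentence-embedding teacher via a cosine-distance loss. Mean pooling here is a toy stand-in for the paper's context module, and the loss form is an assumption for illustration.

```python
import numpy as np

def mean_pool_context(frame_embeddings):
    """Toy stand-in for a learned context module: average the past frames."""
    return np.asarray(frame_embeddings, dtype=float).mean(axis=0)

def distillation_loss(student_ctx, teacher_sent, eps=1e-8):
    """Cosine-distance distillation loss between the student context vector
    and the frozen sentence-embedding teacher target (illustrative)."""
    s = np.asarray(student_ctx, dtype=float)
    t = np.asarray(teacher_sent, dtype=float)
    cos = np.dot(s, t) / (np.linalg.norm(s) * np.linalg.norm(t) + eps)
    return 1.0 - cos
```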
The Speech-LLM Takes It All: A Truly Fully End-to-End Spoken Dialog State Tracking Approach
Nizar El Ghazal | Antoine Caubrière | Valentin Vielzeuf
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper presents a comparative study of context management strategies for end-to-end Spoken Dialog State Tracking using Speech-LLMs. We systematically evaluate traditional multimodal context (combining text history and spoken current turn), full spoken history, and compressed spoken history approaches. Our experiments on the SpokenWOZ corpus demonstrate that providing the full spoken conversation as input yields the highest performance among models of similar size, significantly surpassing prior methods. Furthermore, we show that attention-pooling-based compression of the spoken history offers a strong trade-off, maintaining competitive accuracy with reduced context size. Detailed analysis confirms that improvements stem from more effective context utilization.
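The attention-pooling compression mentioned in the abstract can be sketched as cross-attention from a small set of learned query vectors onto the long spoken-history sequence, yielding a fixed-size summary. The query matrix and scaling below are generic assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(history, queries):
    """Compress a spoken-history sequence (T, d) into K summary vectors
    using K learned queries (K, d) — scaled dot-product attention pooling."""
    history = np.asarray(history, dtype=float)   # (T, d)
    queries = np.asarray(queries, dtype=float)   # (K, d)
    d = history.shape[-1]
    scores = queries @ history.T / np.sqrt(d)    # (K, T) attention logits
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over time
    return weights @ history                     # (K, d) summary
```

The summary length K is fixed regardless of the dialog length T, which is what makes the trade-off between accuracy and context size tunable.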
2024
Towards efficient self-supervised representation learning in speech processing
Luis Lugo | Valentin Vielzeuf
Findings of the Association for Computational Linguistics: EACL 2024
Self-supervised learning has achieved impressive results in speech processing, but current models are computationally expensive, raising environmental concerns because of their high energy consumption. We therefore propose an efficient self-supervised approach to address these high computational costs, using a single GPU for 24 to 48 hours of pretraining. The proposed approach combines linear, convolutional, and self-attention layers with several optimizations, including dynamic batching, flash attention, mixed-precision training, gradient accumulation, and acoustic feature extraction with input preprocessing. Computational cost estimations for our proposed model indicate up to two orders of magnitude improvement in computational efficiency over existing speech models.
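Of the optimizations listed, dynamic batching is the most self-contained to sketch: utterances are grouped so each batch stays under a frame budget rather than a fixed batch size, keeping GPU utilization stable across variable-length audio. The greedy order-preserving scheme below is one common variant, chosen for illustration.

```python
def dynamic_batches(lengths, max_frames):
    """Greedily group utterance indices so the total frame count of each
    batch stays within `max_frames` (a common dynamic-batching scheme)."""
    batches, current, frames = [], [], 0
    for i, n in enumerate(lengths):
        if current and frames + n > max_frames:
            batches.append(current)      # flush the full batch
            current, frames = [], 0
        current.append(i)
        frames += n
    if current:
        batches.append(current)          # trailing partial batch
    return batches
```

Sorting utterances by length before batching further reduces padding waste, at the cost of less randomness per epoch.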
2023
OLISIA: a Cascade System for Spoken Dialogue State Tracking
Léo Jacqmin | Lucas Druart | Yannick Estève | Benoît Favre | Lina M Rojas | Valentin Vielzeuf
Proceedings of the Eleventh Dialog System Technology Challenge
Though Dialogue State Tracking (DST) is a core component of spoken dialogue systems, recent work on this task mostly deals with chat corpora, disregarding the discrepancies between spoken and written language. In this paper, we propose OLISIA, a cascade system which integrates an Automatic Speech Recognition (ASR) model and a DST model. We introduce several adaptations in the ASR and DST modules to improve integration and robustness to spoken conversations. With these adaptations, our system ranked first in DSTC11 Track 3, a benchmark to evaluate spoken DST. We conduct an in-depth analysis of the results and find that normalizing the ASR outputs, adapting the DST inputs through data augmentation, and increasing the size of the pre-trained models all play an important role in reducing the performance discrepancy between written and spoken conversations.
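The ASR-output normalization the analysis highlights can be illustrated with a minimal text cleaner that reduces written/spoken mismatch (lowercasing, punctuation stripping, whitespace collapsing). The paper's exact normalization rules are not specified here; this is a generic sketch.

```python
import re
import string

# Pattern matching any ASCII punctuation character.
_PUNCT = re.compile(f"[{re.escape(string.punctuation)}]")

def normalize_asr(text):
    """Lowercase, strip punctuation, and collapse whitespace so ASR
    hypotheses better match written-style DST training data (illustrative)."""
    text = _PUNCT.sub(" ", text.lower())
    return " ".join(text.split())
```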