Elys Allesiardo
2026
Forewarned Is Forearmed: When Non-Sequential Embedding Turns into an Anomaly Detector
Elys Allesiardo | Antoine Caubrière | Valentin Vielzeuf
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper offers an in-depth analysis of non-sequential multimodal sentence-level embeddings, with a particular focus on the SONAR model. We demonstrate that certain embedding dimensions are sensitive to perturbations and can serve as indicators of decoding anomalies. By leveraging the consistency between successive encoding and decoding, we successfully build an accurate detector. Additionally, we explore modifying specific dimensions of interest to attempt to correct them. This work underscores the importance of understanding and analyzing the embeddings themselves to enhance the reliability of multimodal representations.
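The consistency check described in the abstract — decode an embedding, re-encode the result, and compare it with the original — can be sketched as below. This is an illustrative toy, not the paper's implementation: `encode`/`decode` are deterministic stand-ins for SONAR's sentence encoder/decoder (the real model maps sentences to 1024-dimensional vectors), and the 0.9 threshold is an assumed example value.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy stand-ins for the encoder/decoder (hypothetical; the real SONAR
# model is a learned sentence <-> embedding mapping).
_memory = {}

def encode(text):
    # Deterministic pseudo-embedding derived from the text.
    seed = sum(ord(c) for c in text)
    emb = np.random.default_rng(seed).standard_normal(16)
    _memory[emb.tobytes()] = text
    return emb

def decode(emb):
    # Exact lookup stands in for the decoder; an unseen (perturbed)
    # embedding "decodes" to a garbled sentence, as in a real anomaly.
    return _memory.get(emb.tobytes(), "<garbled>")

def reencode_consistency(emb):
    """Decode the embedding, re-encode the decoded text, and measure
    how close the round trip lands to the original embedding."""
    return cosine(emb, encode(decode(emb)))

def is_anomalous(emb, threshold=0.9):
    # Assumed threshold for illustration: low round-trip consistency
    # flags a likely decoding anomaly.
    return reencode_consistency(emb) < threshold
```

A clean embedding round-trips to itself (consistency 1.0), while a perturbed one decodes to a different sentence and scores lower, which is the signal the detector exploits.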
SENS-ASR: Semantic Embedding Injection in Neural-transducer for Streaming Automatic Speech Recognition
Youness Dkhissi | Valentin Vielzeuf | Elys Allesiardo | Anthony Larcher
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Many Automatic Speech Recognition (ASR) applications require streaming processing of the audio data. In streaming mode, ASR systems must start transcribing the input stream before it is complete, i.e., they have to process a stream of inputs with limited (or no) future context. Compared to offline mode, this reduction of future context degrades the performance of streaming ASR systems, especially when operating under low-latency constraints. In this work, we present SENS-ASR, an approach that enhances the transcription quality of streaming ASR by reinforcing the acoustic information with semantic information. This semantic information is extracted from the available past frame embeddings by a context module. This module is trained using knowledge distillation from a sentence embedding Language Model fine-tuned on the training dataset transcriptions. Experiments on standard datasets show that SENS-ASR significantly improves the Word Error Rate in small-chunk streaming scenarios.
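The distillation objective described above — training a context module so that its summary of past frame embeddings matches a teacher sentence embedding — can be sketched as follows. This is a hedged illustration: the mean-pooling `context_module` is a toy stand-in for the learned module, and the cosine-distance loss is an assumed example; the paper's exact loss is not specified here.

```python
import numpy as np

def context_module(past_frames):
    # Toy stand-in: mean-pool the past frame embeddings into one
    # context vector. The real module in SENS-ASR is learned.
    return np.mean(past_frames, axis=0)

def distillation_loss(student_ctx, teacher_sent):
    """Cosine-distance distillation: pull the context module's output
    toward the teacher sentence embedding (illustrative loss)."""
    cos = np.dot(student_ctx, teacher_sent) / (
        np.linalg.norm(student_ctx) * np.linalg.norm(teacher_sent))
    return 1.0 - cos

# Example: past frame embeddings summarized, then compared with a
# (hypothetical) teacher sentence embedding.
frames = np.ones((8, 4))          # 8 past frames, 4-d embeddings
teacher = np.ones(4)              # teacher sentence embedding
loss = distillation_loss(context_module(frames), teacher)
```

Minimizing this loss drives the context vector to align with the sentence-level semantics, which the streaming decoder can then use to compensate for the missing future context.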