2025
Autoregressive Speech Synthesis without Vector Quantization
Lingwei Meng | Long Zhou | Shujie Liu | Sanyuan Chen | Bing Han | Shujie Hu | Yanqing Liu | Jinyu Li | Sheng Zhao | Xixin Wu | Helen M. Meng | Furu Wei
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We present MELLE, a novel continuous-valued token based language modeling approach for text-to-speech synthesis (TTS). MELLE autoregressively generates continuous mel-spectrogram frames directly from the text condition, bypassing the need for vector quantization, which is typically designed for audio compression and sacrifices fidelity compared to continuous representations. Specifically, (i) instead of cross-entropy loss, we apply a regression loss with a proposed spectrogram flux loss function to model the probability distribution of the continuous-valued tokens; (ii) we incorporate variational inference into MELLE to facilitate sampling mechanisms, thereby enhancing output diversity and model robustness. Experiments demonstrate that, compared to the two-stage codec language model VALL-E and its variants, the single-stage MELLE mitigates robustness issues by avoiding the inherent flaws of sampling vector-quantized codes, achieves superior performance across multiple metrics, and, most importantly, offers a more streamlined paradigm. Demos of our work are provided at https://aka.ms/melle.
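As a rough illustration of the core idea (a minimal sketch, not the authors' implementation; PyTorch is assumed, all module names, hyperparameters, and the simplified loss below are hypothetical, and the paper's spectrogram flux loss is not reproduced): an autoregressive decoder predicts a mean and log-variance per continuous mel frame, is trained with a regression objective instead of cross-entropy over codec tokens, and samples from the predicted Gaussian to recover output diversity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousARDecoder(nn.Module):
    """Toy autoregressive decoder over continuous mel frames (illustrative only)."""
    def __init__(self, d_model=512, n_mels=80):
        super().__init__()
        self.prenet = nn.Linear(n_mels, d_model)        # embed the previous frame
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.mu_head = nn.Linear(d_model, n_mels)       # mean of the next frame
        self.logvar_head = nn.Linear(d_model, n_mels)   # log-variance for sampling
        self.stop_head = nn.Linear(d_model, 1)          # end-of-utterance logit

    def forward(self, prev_frames, text_cond):
        # prev_frames: (B, T, n_mels); text_cond: (B, d_model) pooled text condition
        h = self.prenet(prev_frames) + text_cond.unsqueeze(1)
        out, _ = self.rnn(h)
        return self.mu_head(out), self.logvar_head(out), self.stop_head(out)

def regression_loss(mu, target):
    # Regression objective on continuous frames (L1 + L2), replacing the
    # cross-entropy loss used over discrete codec tokens.
    return F.l1_loss(mu, target) + F.mse_loss(mu, target)

def sample_next_frame(mu, logvar):
    # Variational-style sampling: draw the next frame from the predicted Gaussian,
    # restoring diversity without a discrete token vocabulary.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)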
SLAM-Omni: Timbre-Controllable Voice Interaction System with Single-Stage Training
Wenxi Chen | Ziyang Ma | Ruiqi Yan | Yuzhe Liang | Xiquan Li | Ruiyang Xu | Zhikang Niu | Yanqiao Zhu | Yifan Yang | Zhanxun Liu | Kai Yu | Yuxuan Hu | Jinyu Li | Yan Lu | Shujie Liu | Xie Chen
Findings of the Association for Computational Linguistics: ACL 2025
Recent advancements highlight the potential of end-to-end real-time spoken dialogue systems, showcasing their low latency and high quality. In this paper, we introduce SLAM-Omni, a timbre-controllable, end-to-end voice interaction system with single-stage training. SLAM-Omni achieves zero-shot timbre control by modeling spoken language with semantic tokens and decoupling speaker information to a vocoder. By predicting grouped speech semantic tokens at each step, our method significantly reduces the sequence length of audio tokens, accelerating both training and inference. Additionally, we propose historical text prompting to compress dialogue history, facilitating efficient multi-round interactions. Comprehensive evaluations reveal that SLAM-Omni outperforms prior models of similar scale, requiring only 15 hours of training on 4 GPUs with limited data. Notably, it is the first spoken dialogue system to achieve competitive performance with a single-stage training approach, eliminating the need for pre-training on TTS or ASR tasks. Further experiments validate its multilingual and multi-turn dialogue capabilities on larger datasets.
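The sequence-shortening trick can be sketched as follows (a minimal sketch, not the SLAM-Omni code; PyTorch is assumed, and the group size, vocabulary size, and module names are illustrative): a sequence of T discrete semantic tokens is folded into T/G decoding steps, and G parallel classification heads predict one whole group per step.

import torch
import torch.nn as nn

class GroupedTokenHead(nn.Module):
    def __init__(self, d_model=768, vocab_size=1024, group_size=4):
        super().__init__()
        self.group_size = group_size
        # One classifier per position within a group of semantic tokens.
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(group_size)
        )

    def forward(self, hidden):
        # hidden: (B, T/G, d_model) -> logits: (B, T/G, G, vocab_size)
        return torch.stack([head(hidden) for head in self.heads], dim=2)

def fold_tokens(tokens, group_size=4):
    # tokens: (B, T) with T divisible by group_size -> targets of shape (B, T/G, G)
    B, T = tokens.shape
    return tokens.view(B, T // group_size, group_size)

With group_size=4, a 1000-token utterance becomes a 250-step target, shortening both training and autoregressive inference roughly fourfold.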
Failing Forward: Improving Generative Error Correction for ASR with Synthetic Data and Retrieval Augmentation
Sreyan Ghosh | Mohammad Sadegh Rasooli | Michael Levit | Peidong Wang | Jian Xue | Dinesh Manocha | Jinyu Li
Findings of the Association for Computational Linguistics: ACL 2025
Generative Error Correction (GEC) has emerged as a powerful post-processing method to boost the performance of Automatic Speech Recognition (ASR) systems. In this paper, we first show that GEC models struggle to generalize beyond the specific types of errors encountered during training, limiting their ability to correct new, unseen errors at test time, particularly in out-of-domain (OOD) scenarios. This phenomenon is amplified with named entities (NEs), where, in addition to insufficient contextual information or knowledge about the NEs, novel NEs keep emerging. To address these issues, we propose DARAG (Data- and Retrieval-Augmented Generative Error Correction), a novel approach designed to improve GEC for ASR in in-domain (ID) and OOD scenarios. First, we augment the GEC training dataset with synthetic data generated using foundational generative models, thereby simulating additional errors from which the model can learn. For OOD scenarios, we similarly simulate test-time errors from new domains in an unsupervised fashion. Additionally, to better handle NEs, we introduce retrieval-augmented correction, wherein we augment the model input with entities retrieved from a datastore of NEs. Our approach is simple, scalable, and both domain- and language-agnostic. We experiment on multiple datasets and settings, showing that DARAG outperforms all our baselines, achieving 8%–30% relative WER improvements in ID and 10%–33% improvements in OOD settings.
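The retrieval-augmented part of the idea can be illustrated with a small sketch (not the DARAG implementation; the toy datastore, the difflib-based similarity measure, and the prompt format are all assumptions): candidate named entities similar to words in the ASR hypothesis are retrieved and prepended to the correction model's input.

import difflib

NE_DATASTORE = ["Schwarzenegger", "Llanfairpwllgwyngyll", "Beyoncé"]  # toy datastore

def retrieve_entities(hypothesis, datastore, k=2, cutoff=0.6):
    # Collect datastore entries that fuzzily match any word in the hypothesis.
    candidates = []
    for word in hypothesis.split():
        candidates += difflib.get_close_matches(word, datastore, n=k, cutoff=cutoff)
    return list(dict.fromkeys(candidates))  # de-duplicate, keep order

def build_gec_input(hypothesis, datastore):
    entities = retrieve_entities(hypothesis, datastore)
    context = "; ".join(entities) if entities else "none"
    return f"Entities: {context}\nHypothesis: {hypothesis}\nCorrected:"

print(build_gec_input("the singer beyonce performed last night", NE_DATASTORE))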
2024
WavLLM: Towards Robust and Adaptive Speech Large Language Model
Shujie Hu | Long Zhou | Shujie Liu | Sanyuan Chen | Lingwei Meng | Hongkun Hao | Jing Pan | Xunying Liu | Jinyu Li | Sunit Sivasankaran | Linquan Liu | Furu Wei
Findings of the Association for Computational Linguistics: EMNLP 2024
Recent advancements in large language models (LLMs) have expanded their scope in natural language processing (NLP) to encompass multimodal functions. However, integrating listening capabilities effectively remains a significant challenge for generalization and complex auditory task execution. In this work, we introduce WavLLM, a robust and adaptive speech large language model featuring dual encoders: a Whisper encoder for semantics and a WavLM encoder for speaker characteristics. Within the two-stage curriculum learning framework, WavLLM first builds its foundational capabilities by optimizing on mixed elementary single tasks, followed by advanced multi-task training on more complex tasks such as combinations of the elementary tasks. To enhance flexibility and adherence to different tasks and instructions, a prompt-aware LoRA weight adapter is introduced in the second, advanced multi-task training stage. We validate the proposed model on universal speech benchmarks and also apply it to a specialized speech-question-answer (SQA) dataset and a speech Chain-of-Thought (CoT) evaluation set. Experiments demonstrate that the proposed model achieves state-of-the-art performance across a range of speech tasks at the same model size, exhibiting robust generalization capabilities in executing complex tasks using the CoT approach. The codes, models, audio samples, and SQA evaluation set can be accessed at https://github.com/microsoft/SpeechT5/tree/main/WavLLM.
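The dual-encoder idea can be sketched as follows (a minimal sketch, not the WavLLM code; PyTorch is assumed, and the feature dimensions and module names are illustrative): semantic features from one speech encoder and speaker/acoustic features from another are concatenated frame-wise and projected into the LLM embedding space.

import torch
import torch.nn as nn

class DualEncoderAdapter(nn.Module):
    def __init__(self, d_semantic=1280, d_acoustic=1024, d_llm=4096):
        super().__init__()
        self.proj = nn.Linear(d_semantic + d_acoustic, d_llm)

    def forward(self, semantic_feats, acoustic_feats):
        # semantic_feats: (B, T, d_semantic), e.g. from a Whisper-style encoder
        # acoustic_feats: (B, T, d_acoustic), e.g. from a WavLM-style encoder
        fused = torch.cat([semantic_feats, acoustic_feats], dim=-1)
        return self.proj(fused)  # (B, T, d_llm), consumed by the LLM as soft speech tokens

# Example with random features standing in for real encoder outputs.
adapter = DualEncoderAdapter()
speech_embeds = adapter(torch.randn(1, 50, 1280), torch.randn(1, 50, 1024))
print(speech_embeds.shape)  # torch.Size([1, 50, 4096])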
2022
SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing
Junyi Ao | Rui Wang | Long Zhou | Chengyi Wang | Shuo Ren | Yu Wu | Shujie Liu | Tom Ko | Qing Li | Yu Zhang | Zhihua Wei | Yao Qian | Jinyu Li | Furu Wei
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Motivated by the success of T5 (Text-To-Text Transfer Transformer) in pre-trained natural language processing models, we propose a unified-modal SpeechT5 framework that explores the encoder-decoder pre-training for self-supervised speech/text representation learning. The SpeechT5 framework consists of a shared encoder-decoder network and six modal-specific (speech/text) pre/post-nets. After preprocessing the input speech/text through the pre-nets, the shared encoder-decoder network models the sequence-to-sequence transformation, and then the post-nets generate the output in the speech/text modality based on the output of the decoder. Leveraging large-scale unlabeled speech and text data, we pre-train SpeechT5 to learn a unified-modal representation, hoping to improve the modeling capability for both speech and text. To align the textual and speech information into this unified semantic space, we propose a cross-modal vector quantization approach that randomly mixes up speech/text states with latent units as the interface between encoder and decoder. Extensive evaluations show the superiority of the proposed SpeechT5 framework on a wide variety of spoken language processing tasks, including automatic speech recognition, speech synthesis, speech translation, voice conversion, speech enhancement, and speaker identification.
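The shared-backbone design can be illustrated with a simplified sketch (not the SpeechT5 code; PyTorch is assumed, the paper's six pre/post-nets are collapsed to one pre-net and one post-net per modality, and all dimensions and module names are assumptions): a single encoder-decoder is wrapped by modal-specific nets, and the chosen input/output modalities select which nets are applied.

import torch
import torch.nn as nn

class SharedSpeechTextModel(nn.Module):
    def __init__(self, d_model=256, vocab_size=1000, n_mels=80):
        super().__init__()
        self.backbone = nn.Transformer(d_model=d_model, nhead=4,
                                       num_encoder_layers=2, num_decoder_layers=2,
                                       batch_first=True)
        # Modal-specific pre-nets map raw inputs into the shared space ...
        self.pre = nn.ModuleDict({"speech": nn.Linear(n_mels, d_model),
                                  "text": nn.Embedding(vocab_size, d_model)})
        # ... and post-nets map decoder states back to each modality.
        self.post = nn.ModuleDict({"speech": nn.Linear(d_model, n_mels),
                                   "text": nn.Linear(d_model, vocab_size)})

    def forward(self, src, tgt, src_modality, tgt_modality):
        enc_in = self.pre[src_modality](src)
        dec_in = self.pre[tgt_modality](tgt)
        dec_out = self.backbone(enc_in, dec_in)
        return self.post[tgt_modality](dec_out)

# ASR-style routing: speech in, text out.
model = SharedSpeechTextModel()
logits = model(torch.randn(1, 40, 80), torch.randint(0, 1000, (1, 10)),
               "speech", "text")
print(logits.shape)  # torch.Size([1, 10, 1000])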
SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training
Ziqiang Zhang | Long Zhou | Junyi Ao | Shujie Liu | Lirong Dai | Jinyu Li | Furu Wei
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
The rapid development of single-modal pre-training has prompted researchers to pay more attention to cross-modal pre-training methods. In this paper, we propose a unified-modal speech-unit-text pre-training model, SpeechUT, to connect the representations of a speech encoder and a text decoder with a shared unit encoder. Leveraging hidden units as an interface to align speech and text, we can decompose the speech-to-text model into a speech-to-unit model and a unit-to-text model, which can be jointly pre-trained with unpaired speech and text data, respectively. Our proposed SpeechUT is fine-tuned and evaluated on automatic speech recognition (ASR) and speech translation (ST) tasks. Experimental results show that SpeechUT obtains substantial improvements over strong baselines, and achieves state-of-the-art performance on both the LibriSpeech ASR and MuST-C ST tasks. To better understand the proposed SpeechUT, detailed analyses are conducted. The code and pre-trained models are available at https://aka.ms/SpeechUT.
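The decomposition via hidden units can be sketched as follows (a minimal sketch, not the SpeechUT code; PyTorch is assumed, and all shapes, unit inventories, and module names are illustrative): a speech-to-unit front-end and a unit-to-text back-end share discrete units as their interface, so each half can be pre-trained separately with unpaired speech or text.

import torch
import torch.nn as nn

class SpeechToUnit(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_units=500):
        super().__init__()
        self.encoder = nn.GRU(n_mels, d_model, batch_first=True)
        self.unit_head = nn.Linear(d_model, n_units)

    def forward(self, speech):                 # (B, T, n_mels)
        h, _ = self.encoder(speech)
        return self.unit_head(h).argmax(-1)    # (B, T) discrete unit ids

class UnitToText(nn.Module):
    def __init__(self, n_units=500, d_model=256, vocab_size=1000):
        super().__init__()
        self.unit_embed = nn.Embedding(n_units, d_model)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.text_head = nn.Linear(d_model, vocab_size)

    def forward(self, units):                  # (B, T) unit ids
        h, _ = self.decoder(self.unit_embed(units))
        return self.text_head(h)               # (B, T, vocab_size) text logits

# The two halves compose into a speech-to-text model through the unit interface.
units = SpeechToUnit()(torch.randn(1, 40, 80))
logits = UnitToText()(units)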
2020
Production de la parole en réponse à de multiples perturbations du feedback auditif (Speech production in response to multiple perturbations of auditory feedback)
Jinyu Li | Leonardo Lancia
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 1 : Journées d'Études sur la Parole
Previous studies have shown that speech production depends on auditory feedback conditions. This study investigates the interactions between the effects of three factors on speech production: 1) delayed auditory feedback (DAF), 2) shifting the f0 of the auditory feedback, and 3) the complexity of the syllables that make up the utterances. We manipulated the auditory feedback of 20 female French-speaking participants while they repeated three sentences. We observed that more than half of the participants tended to follow the direction of the f0 shift of the auditory feedback when receiving this perturbation continuously. The syllabic position of vowels is an important factor modulating the effect of auditory feedback perturbations. However, the analyses described in this study do not show an interactive effect of DAF and the f0 shift on speech production.