Leveraging Unit Language Guidance to Advance Speech Modeling in Textless Speech-to-Speech Translation
Yuhao Zhang | Xiangnan Ma | Kaiqi Kou | Peizhuo Liu | Weiqiao Shan | Benyou Wang | Tong Xiao | Yuxin Huang | Zhengtao Yu | JingBo Zhu
Findings of the Association for Computational Linguistics: ACL 2025
The success of building textless speech-to-speech translation (S2ST) models has attracted much attention. However, S2ST still faces two main challenges: 1) extracting linguistic features from diverse speech signals, referred to as cross-modal (CM) modeling, and 2) learning the alignment between different languages over long sequences, referred to as cross-lingual (CL) modeling. We propose the unit language to overcome these two modeling challenges. The unit language can be considered a text-like representation format, constructed using n-gram language modeling. We implement multi-task learning to utilize the unit language to guide the speech modeling process. Our initial results reveal a conflict when applying source and target unit languages simultaneously, and we propose task prompt modeling to mitigate this conflict. We conduct experiments on four languages of the VoxPopuli dataset. Our method demonstrates significant improvements over a strong baseline and achieves performance comparable to models trained with text.
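The abstract describes the unit language only at a high level: a text-like representation built with n-gram language modeling over discrete speech units. As a rough illustration of one plausible construction, the minimal sketch below greedily merges frequent adjacent unit pairs, BPE-style, into word-like tokens. The function name `build_unit_language`, the merge criterion, and the example unit IDs are all assumptions for illustration, not the paper's exact recipe.

```python
from collections import Counter

def build_unit_language(unit_seqs, num_merges=100):
    """Turn raw discrete-unit streams into word-like n-gram tokens by
    repeatedly merging the most frequent adjacent pair (BPE-style).
    Illustrative approximation only; the paper's construction may differ."""
    seqs = [list(map(str, s)) for s in unit_seqs]
    for _ in range(num_merges):
        # Count all adjacent unit pairs across the corpus.
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), freq = pairs.most_common(1)[0]
        if freq < 2:  # stop when no pair repeats
            break
        merged = a + "_" + b
        # Rewrite every sequence, replacing occurrences of the pair.
        new_seqs = []
        for s in seqs:
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and s[i] == a and s[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(s[i])
                    i += 1
            new_seqs.append(out)
        seqs = new_seqs
    return seqs

# Example: unit IDs such as k-means cluster indices of speech features.
units = [[5, 5, 12, 7, 7, 7, 3], [5, 12, 7, 7, 3, 3]]
print(build_unit_language(units, num_merges=5))
```

In a multi-task setup of the kind the abstract mentions, the resulting token sequences could serve as auxiliary text-like targets alongside the main speech-to-unit translation objective.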