Xianghui He
2024
Pause-Aware Automatic Dubbing using LLM and Voice Cloning
Yuang Li | Jiaxin Guo | Min Zhang | Ma Miaomiao | Zhiqiang Rao | Weidong Zhang | Xianghui He | Daimeng Wei | Hao Yang
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)
Automatic dubbing aims to translate the speech of a video into another language, ensuring that the new speech naturally fits the original video. This paper details Huawei Translation Services Center's (HW-TSC) submission to the IWSLT 2024 automatic dubbing task under the unconstrained setting. Our system's machine translation (MT) component uses a Transformer-based MT model and an LLM-based post-editor to produce translations of varying lengths. The text-to-speech (TTS) component employs a VITS-based TTS model and a voice cloning module to emulate the original speaker's vocal timbre. For enhanced dubbing synchrony, we introduce a parsing-informed pause selector. Finally, we rerank multiple results based on lip-sync error distance (LSE-D) and character error rate (CER). Our system achieves an LSE-D of 10.75 and 12.19 on subset1 and subset2 of the DE-EN test set, respectively, outperforming last year's best system.
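The final reranking step described in the abstract can be pictured with a minimal sketch. The candidate structure, the weighting of the two metrics, and all names below are illustrative assumptions, not the authors' implementation; both LSE-D and CER are treated as "lower is better".

```python
# Hypothetical sketch: rerank dubbed candidates by lip-sync error distance
# (LSE-D) and character error rate (CER). The combination weight is an
# assumed hyperparameter, not taken from the paper.
from dataclasses import dataclass
from typing import List


@dataclass
class DubbingCandidate:
    audio_path: str   # synthesized speech for one translation hypothesis
    lse_d: float      # lip-sync error distance against the original video
    cer: float        # CER of an ASR transcript vs. the target-side text


def rerank(candidates: List[DubbingCandidate],
           cer_weight: float = 1.0) -> DubbingCandidate:
    """Return the candidate with the lowest combined LSE-D + weighted CER."""
    return min(candidates, key=lambda c: c.lse_d + cer_weight * c.cer)


# Usage: pick the best of several TTS outputs generated from
# translations of different lengths.
best = rerank([
    DubbingCandidate("hyp1.wav", lse_d=11.2, cer=0.04),
    DubbingCandidate("hyp2.wav", lse_d=10.6, cer=0.07),
])
print(best.audio_path)
```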
2022
HW-TSC’s Submissions to the WMT22 Word-Level Auto Completion Task
Hao Yang | Hengchao Shang | Zongyao Li | Daimeng Wei | Xianghui He | Xiaoyu Chen | Zhengzhe Yu | Jiaxin Guo | Jinlong Yang | Shaojun Li | Yuanchang Luo | Yuhao Xie | Lizhi Lei | Ying Qin
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper presents the submissions of Huawei Translation Services Center (HW-TSC) to the WMT 2022 Word-Level AutoCompletion task. We propose an end-to-end autoregressive model with bi-context, based on the Transformer, to solve the task. The model uses a mixture of subword and character encoding units to jointly encode the human input, the target-side context, and the decoded sequence, ensuring full utilization of the available information. We use a single model to handle all four types of data structures in the task. During training, we experiment with using a machine translation model as the pre-trained model and fine-tuning it for the task. We also add BERT-style MLM data at the fine-tuning stage to improve model performance. We participate in the zh→en, en→de, and de→en directions and win first place in all three tracks. In particular, we outperform the second-place system by more than 5% in accuracy on the zh→en and en→de tracks. The results are also supported by human evaluation, demonstrating the effectiveness of our model.
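A rough illustration of the joint encoding idea is sketched below: subword units for the source and the target-side context, character units for the partially typed word. The special tokens, separators, and function name are assumptions for illustration only, not the paper's exact scheme.

```python
# Hypothetical sketch of assembling one bi-context WLAC encoder input:
# source sentence and target-side contexts as subwords, the human-typed
# prefix as characters. An autoregressive decoder would then generate
# the completed target word.
def build_wlac_input(source_subwords, left_context_subwords,
                     right_context_subwords, typed_chars):
    """Concatenate subword and character units into one encoder sequence."""
    return (
        source_subwords
        + ["<sep>"] + left_context_subwords
        + ["<sep>"] + right_context_subwords
        + ["<typed>"] + list(typed_chars)   # character-level human input
    )


# Usage: an en->de example with an empty right context.
seq = build_wlac_input(
    ["▁machine", "▁translation"],      # source subwords
    ["▁maschinelle"],                  # target left context
    [],                                # target right context (may be empty)
    "Übers",                           # characters typed by the human
)
print(seq)
```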