JHU IWSLT 2022 Dialect Speech Translation System Description
Jinyi Yang | Amir Hussein | Matthew Wiesner | Sanjeev Khudanpur
Proceedings of the 19th International Conference on Spoken Language Translation (IWSLT 2022)
This paper details the Johns Hopkins speech translation (ST) system used in the IWSLT 2022 dialect speech translation task. Our system uses a cascade of automatic speech recognition (ASR) and machine translation (MT). We use a Conformer model for the ASR system and a Transformer model for machine translation. Surprisingly, we found that while using additional ASR training data resulted in only a negligible change in performance as measured by BLEU or word error rate (WER), aggressive text normalization improved BLEU more significantly. We also describe an approach, similar to back-translation, for improving performance using synthetic dialectal source text produced from source sentences in mismatched dialects.