Melanie Galea


2024

UM IWSLT 2024 Low-Resource Speech Translation: Combining Maltese and North Levantine Arabic
Sara Nabhani | Aiden Williams | Miftahul Jannat | Kate Rebecca Belcher | Melanie Galea | Anna Taylor | Kurt Micallef | Claudia Borg
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)

The IWSLT low-resource track encourages innovation in the field of speech translation, particularly in data-scarce conditions. This paper details our submission for the IWSLT 2024 low-resource track shared task for Maltese-English and North Levantine Arabic-English spoken language translation, using an unconstrained pipeline approach. Using language models, we improve ASR performance by correcting the produced output. We present a two-step approach for MT using data from external sources, showing improvements over baseline systems. We also explore transliteration as a means to further augment MT data and exploit the cross-lingual similarities between Maltese and Arabic.
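
As a rough illustration of the transliteration-based augmentation idea, the sketch below maps Maltese text into Arabic script with a hand-picked character table and appends the transliterated sentences to a toy parallel corpus. The mapping, the helper names, and the example pair are illustrative assumptions only, not the system used in the submission; a real mapping would also need to handle digraphs such as "għ" and "ie".

# Minimal sketch (not the authors' system): Maltese-to-Arabic-script
# transliteration as an MT data-augmentation step. The character map is
# deliberately small and incomplete; unmapped characters pass through.
MT_TO_AR = {
    "b": "ب", "t": "ت", "d": "د", "r": "ر", "s": "س",
    "m": "م", "n": "ن", "l": "ل", "k": "ك", "q": "ق",
    "ħ": "ح", "x": "ش", "ġ": "ج", "ż": "ز", "a": "ا",
}

def transliterate(text: str) -> str:
    """Map each Maltese character to an Arabic one where a mapping exists."""
    return "".join(MT_TO_AR.get(ch, ch) for ch in text.lower())

# Augment a (toy) parallel corpus with transliterated source sides so an MT
# model can exploit Maltese-Arabic lexical overlap.
parallel = [("kiteb", "he wrote")]
augmented = parallel + [(transliterate(src), tgt) for src, tgt in parallel]
print(augmented)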

UOM-Constrained IWSLT 2024 Shared Task Submission - Maltese Speech Translation
Kurt Abela | Md Abdur Razzaq Riyadh | Melanie Galea | Alana Busuttil | Roman Kovalev | Aiden Williams | Claudia Borg
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)

This paper presents our IWSLT 2024 shared task submission on the low-resource track. This submission forms part of the constrained setup, implying limited data for training. Following the introduction, this paper consists of a literature review covering previous approaches to speech translation and their application to Maltese, followed by the methodology, evaluation and results, and the conclusion. A cascaded submission on the Maltese to English language pair is presented, consisting of a pipeline with a DeepSpeech 1 Automatic Speech Recognition (ASR) system, a KenLM model to optimise the transcriptions, and finally an LSTM machine translation model. The submission achieves a 0.5 BLEU score on the overall test set, and the ASR system achieves a word error rate of 97.15%. Our code is made publicly available.
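
The cascaded design can be pictured as three stages chained in sequence. The sketch below is a minimal stand-in with placeholder functions and dummy outputs; in the actual submission these roles are played by DeepSpeech 1, a KenLM language model, and an LSTM MT model, none of which are invoked here.

# Minimal sketch of a cascaded speech-translation pipeline of the kind
# described above (ASR -> LM-based transcript correction -> MT). All three
# stage functions are placeholders returning canned strings.

def asr(audio: bytes) -> str:
    """Placeholder ASR stage; a real system decodes audio to Maltese text."""
    return "il- qattus rieqed"

def lm_correct(hypothesis: str) -> str:
    """Placeholder LM stage; a real system rescores or repairs the transcript."""
    return hypothesis.replace("il- ", "il-")

def translate(source: str) -> str:
    """Placeholder MT stage; a real system translates Maltese into English."""
    return "the cat is sleeping"

def cascade(audio: bytes) -> str:
    # Each stage consumes the previous stage's output, so ASR errors
    # propagate downstream unless the LM stage repairs them.
    return translate(lm_correct(asr(audio)))

print(cascade(b""))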