GMU Systems for the IWSLT 2025 Low-Resource Speech Translation Shared Task

Chutong Meng, Antonios Anastasopoulos

Abstract
This paper describes the GMU systems for the IWSLT 2025 low-resource speech translation shared task. We trained systems for all language pairs, except for Levantine Arabic. We fine-tuned SeamlessM4T-v2 for automatic speech recognition (ASR), machine translation (MT), and end-to-end speech translation (E2E ST). The ASR and MT models are also used to form cascaded ST systems. Additionally, we explored various training paradigms for E2E ST fine-tuning, including direct E2E fine-tuning, multi-task training, and parameter initialization using components from fine-tuned ASR and/or MT models. Our results show that (1) direct E2E fine-tuning yields strong results; (2) initializing with a fine-tuned ASR encoder improves ST performance on languages SeamlessM4T-v2 has not been trained on; (3) multi-task training can be slightly helpful.
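For illustration, here is a minimal sketch of one training paradigm described in the abstract: initializing the end-to-end ST model's speech encoder with the encoder of a previously fine-tuned ASR model before ST fine-tuning. This is not the authors' released code; it assumes the Hugging Face transformers SeamlessM4T-v2 classes, and the checkpoint path "finetuned-asr-checkpoint" and the speech_encoder attribute name are assumptions.

from transformers import SeamlessM4Tv2ForSpeechToText

# Base multilingual SeamlessM4T-v2 model to be fine-tuned for end-to-end ST.
st_model = SeamlessM4Tv2ForSpeechToText.from_pretrained("facebook/seamless-m4t-v2-large")

# Hypothetical checkpoint obtained by fine-tuning the same architecture on ASR data.
asr_model = SeamlessM4Tv2ForSpeechToText.from_pretrained("finetuned-asr-checkpoint")

# Copy the fine-tuned ASR speech-encoder weights into the ST model; the decoder keeps
# its pretrained weights, and the whole model is then fine-tuned on the ST data.
st_model.speech_encoder.load_state_dict(asr_model.speech_encoder.state_dict())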
Anthology ID: 2025.iwslt-1.29
Volume: Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
Month: July
Year: 2025
Address: Vienna, Austria (in-person and online)
Editors: Elizabeth Salesky, Marcello Federico, Antonios Anastasopoulos
Venues: IWSLT | WS
Publisher: Association for Computational Linguistics
Pages: 289–300
URL: https://preview.aclanthology.org/landing_page/2025.iwslt-1.29/
Cite (ACL): Chutong Meng and Antonios Anastasopoulos. 2025. GMU Systems for the IWSLT 2025 Low-Resource Speech Translation Shared Task. In Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025), pages 289–300, Vienna, Austria (in-person and online). Association for Computational Linguistics.
Cite (Informal): GMU Systems for the IWSLT 2025 Low-Resource Speech Translation Shared Task (Meng & Anastasopoulos, IWSLT 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.iwslt-1.29.pdf