SYSTRAN @ IWSLT 2025 Low-resource track

Marko Avila, Josep Crego


Abstract
SYSTRAN submitted systems for one language pair in the 2025 Low-Resource Language Track. Our main contribution lies in the tight coupling and light fine-tuning of an ASR encoder (Whisper) with a neural machine translation decoder (NLLB), forming an efficient speech translation pipeline. We present the modeling strategies and optimizations implemented to build a system that, unlike large-scale end-to-end models, performs effectively under constraints of limited training data and computational resources. This approach enables the development of high-quality speech translation in low-resource settings, while ensuring both efficiency and scalability. We also conduct a comparative analysis of our proposed system against various paradigms, including a cascaded Whisper+NLLB setup and direct end-to-end fine-tuning of Whisper.
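To make the described coupling concrete, below is a minimal sketch of the general encoder-bridge-decoder pattern the abstract outlines, written against the HuggingFace Transformers API: hidden states from a Whisper audio encoder are passed through a small learned projection and handed to an NLLB model as precomputed encoder outputs for its decoder to cross-attend over. The checkpoint names, the single linear bridge, and the translate_speech helper are illustrative assumptions, not the authors' reported configuration.

    # Minimal sketch of a Whisper-encoder + NLLB-decoder coupling.
    # Checkpoints, the linear bridge, and all hyperparameters are
    # illustrative assumptions, not the paper's exact setup.
    import torch
    from transformers import (
        AutoModelForSeq2SeqLM,
        AutoTokenizer,
        WhisperFeatureExtractor,
        WhisperModel,
    )
    from transformers.modeling_outputs import BaseModelOutput

    # ASR side: keep only Whisper's audio encoder.
    whisper = WhisperModel.from_pretrained("openai/whisper-small")
    speech_encoder = whisper.encoder
    feature_extractor = WhisperFeatureExtractor.from_pretrained(
        "openai/whisper-small"
    )

    # MT side: NLLB seq2seq model; its own text encoder is bypassed here.
    nllb = AutoModelForSeq2SeqLM.from_pretrained(
        "facebook/nllb-200-distilled-600M"
    )
    tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

    # Learned bridge between hidden sizes (768 for whisper-small, 1024 for
    # NLLB); in a light fine-tuning regime this projection would be one of
    # the few trained components.
    bridge = torch.nn.Linear(speech_encoder.config.d_model, nllb.config.d_model)

    def translate_speech(waveform, sampling_rate=16000, tgt_lang="eng_Latn"):
        # Log-mel features -> Whisper encoder hidden states.
        feats = feature_extractor(
            waveform, sampling_rate=sampling_rate, return_tensors="pt"
        ).input_features
        with torch.no_grad():
            speech_states = speech_encoder(feats).last_hidden_state
        # Project into NLLB's embedding space and decode, passing the
        # projected states as precomputed encoder outputs.
        memory = BaseModelOutput(last_hidden_state=bridge(speech_states))
        out_ids = nllb.generate(
            encoder_outputs=memory,
            forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
            max_new_tokens=128,
        )
        return tokenizer.batch_decode(out_ids, skip_special_tokens=True)

In a light fine-tuning setup such as the one the abstract describes, one would typically train only the bridge (and optionally a few decoder layers) on paired speech-translation data, keeping most pretrained parameters frozen; this is what keeps the approach viable under limited data and compute.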
Anthology ID:
2025.iwslt-1.33
Volume:
Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria (in-person and online)
Editors:
Elizabeth Salesky, Marcello Federico, Antonis Anastasopoulos
Venues:
IWSLT | WS
Publisher:
Association for Computational Linguistics
Pages:
324–332
URL:
https://preview.aclanthology.org/landing_page/2025.iwslt-1.33/
Cite (ACL):
Marko Avila and Josep Crego. 2025. SYSTRAN @ IWSLT 2025 Low-resource track. In Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025), pages 324–332, Vienna, Austria (in-person and online). Association for Computational Linguistics.
Cite (Informal):
SYSTRAN @ IWSLT 2025 Low-resource track (Avila & Crego, IWSLT 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.iwslt-1.33.pdf