2025
BeaverTalk: Oregon State University’s IWSLT 2025 Simultaneous Speech Translation System
Matthew Raffel | Victor Agostinelli III | Lizhong Chen
Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
This paper discusses the construction, fine-tuning, and deployment of BeaverTalk, a cascaded system for speech-to-text translation as part of the IWSLT 2025 simultaneous translation task. The system architecture employs a VAD segmenter for breaking a speech stream into segments, Whisper Large V2 for automatic speech recognition (ASR), and Gemma 3 12B for simultaneous translation. The simultaneous translation LLM is fine-tuned via low-rank adapters (LoRAs) for a conversational prompting strategy that leverages a single prior-sentence memory bank from the source language as context. The cascaded system participated in the English-German and English-Chinese language directions for both the low- and high-latency regimes. On the English-German task, the system achieves a BLEU of 24.64 and 27.83 at a StreamLAAL of 1837.86 and 3343.73, respectively. On the English-Chinese task, the system achieves a BLEU of 34.07 and 37.23 at a StreamLAAL of 2216.99 and 3521.35, respectively.
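The cascade described in the abstract can be pictured as a read-transcribe-translate loop. The sketch below is a hypothetical illustration, not the authors' code: vad_segments, transcribe, and translate are placeholder callables standing in for the VAD segmenter, Whisper Large V2, and the LoRA-tuned Gemma 3 12B model.

```python
# Hypothetical sketch of the cascaded pipeline (not the BeaverTalk implementation).
# vad_segments(), transcribe(), and translate() are placeholders for the VAD
# segmenter, the ASR model, and the simultaneous-translation LLM, respectively.

def run_cascade(speech_stream, vad_segments, transcribe, translate):
    """Stream speech through VAD -> ASR -> simultaneous MT with a one-sentence memory."""
    prior_source_sentence = ""  # single prior-sentence memory bank (source side)
    for segment in vad_segments(speech_stream):
        source_text = transcribe(segment)                       # ASR on the current segment
        target_text = translate(source_text,                    # conversational prompt uses the
                                context=prior_source_sentence)  # previous source sentence as context
        prior_source_sentence = source_text                     # update the memory bank
        yield target_text
```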
Findings of the IWSLT 2025 Evaluation Campaign
Victor Agostinelli | Tanel Alumäe | Antonios Anastasopoulos | Luisa Bentivogli | Ondřej Bojar | Claudia Borg | Fethi Bougares | Roldano Cattoni | Mauro Cettolo | Lizhong Chen | William Chen | Raj Dabre | Yannick Estève | Marcello Federico | Mark Fishel | Marco Gaido | Dávid Javorský | Marek Kasztelnik | Fortuné Kponou | Mateusz Krubiński | Tsz Kin Lam | Danni Liu | Evgeny Matusov | Chandresh Kumar Maurya | John P. McCrae | Salima Mdhaffar | Yasmin Moslem | Kenton Murray | Satoshi Nakamura | Matteo Negri | Jan Niehues | Atul Kr. Ojha | John E. Ortega | Sara Papi | Pavel Pecina | Peter Polák | Piotr Połeć | Ashwin Sankar | Beatrice Savoldi | Nivedita Sethiya | Claytone Sikasote | Matthias Sperber | Sebastian Stüker | Katsuhito Sudoh | Brian Thompson | Marco Turchi | Alex Waibel | Patrick Wilken | Rodolfo Zevallos | Vilém Zouhar | Maike Züfle
Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
This paper presents the outcomes of the shared tasks conducted at the 22nd International Workshop on Spoken Language Translation (IWSLT). The workshop addressed seven critical challenges in spoken language translation: simultaneous and offline translation, automatic subtitling and dubbing, model compression, speech-to-speech translation, dialect and low-resource speech translation, and Indic languages. The shared tasks garnered significant participation, with 32 teams submitting their runs. The field’s growing importance is reflected in the increasing diversity of shared task organizers and contributors to this overview paper, representing a balanced mix of industrial and academic institutions. This broad participation demonstrates the rising prominence of spoken language translation in both research and practical applications.
2024
Simul-LLM: A Framework for Exploring High-Quality Simultaneous Translation with Large Language Models
Victor Agostinelli | Max Wild | Matthew Raffel | Kazi Fuad | Lizhong Chen
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) with billions of parameters, pretrained on massive amounts of data, now achieve performance near or beyond the state of the art on a variety of downstream natural language processing tasks. Neural machine translation (NMT) is one such task that LLMs have been applied to with great success. However, little research has focused on applying LLMs to the more difficult subset of NMT called simultaneous translation (SimulMT), where translation begins before the entire source context is available to the model. In this paper, we address key challenges facing LLMs fine-tuned for SimulMT, validate classical SimulMT concepts and practices in the context of LLMs, explore adapting LLMs that are fine-tuned for NMT to the task of SimulMT, and introduce Simul-LLM, the first open-source fine-tuning and evaluation pipeline development framework for LLMs focused on SimulMT.
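For readers unfamiliar with the setting, the sketch below illustrates the classical wait-k read/write loop that any SimulMT evaluation pipeline must drive. It is a hypothetical illustration, not the Simul-LLM API; generate_next_token is a placeholder for a call into the fine-tuned LLM.

```python
# Hypothetical wait-k READ/WRITE loop for simultaneous translation (not the Simul-LLM API).
# generate_next_token(read_source, partial_target) stands in for a call to the LLM.

def wait_k_simulmt(source_tokens, generate_next_token, k=3, max_len=128, eos="</s>"):
    """Interleave READs of source tokens with WRITEs of target tokens under a wait-k policy."""
    read, hypothesis = [], []
    while len(hypothesis) < max_len:
        if len(read) < min(k + len(hypothesis), len(source_tokens)):
            read.append(source_tokens[len(read)])        # READ: consume one more source token
            continue
        token = generate_next_token(read, hypothesis)    # WRITE: emit one target token
        if token == eos:
            break
        hypothesis.append(token)
    return hypothesis
```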
Simultaneous Masking, Not Prompting Optimization: A Paradigm Shift in Fine-tuning LLMs for Simultaneous Translation
Matthew Raffel | Victor Agostinelli | Lizhong Chen
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have achieved state-of-the-art performance in various language processing tasks, motivating their adoption in simultaneous translation. Current fine-tuning methods to adapt LLMs for simultaneous translation focus on prompting optimization strategies using either data augmentation or prompt structure modifications. However, these methods suffer from several issues, such as unnecessarily expanded training sets, computational inefficiency from discarding the key and value cache, increased prompt sizes, or restriction to a single decision policy. To eliminate these issues, in this work, we propose SimulMask, a new paradigm for fine-tuning LLMs for simultaneous translation. It utilizes a novel attention mask approach that models simultaneous translation during fine-tuning by masking attention for a desired decision policy. Applying the proposed SimulMask to a Falcon LLM on the IWSLT 2017 dataset, we observe a significant translation quality improvement compared to state-of-the-art prompting optimization strategies on five language pairs while reducing the computational cost.
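As a rough illustration of the masking idea (not the paper's implementation), the sketch below builds an attention mask that encodes a wait-k decision policy over a concatenated source-target sequence: each target position may attend only to the source prefix that would have been read before that token is emitted. The function name and tensor layout are our own assumptions.

```python
# Minimal sketch of a policy-aware attention mask for a wait-k decision policy
# (an illustration of the general idea, not the SimulMask implementation).

import torch

def wait_k_attention_mask(n_src: int, n_tgt: int, k: int) -> torch.Tensor:
    """Boolean mask of shape (n_src + n_tgt, n_src + n_tgt); True = attention allowed."""
    size = n_src + n_tgt
    mask = torch.tril(torch.ones(size, size)).bool()   # start from a standard causal mask
    for j in range(n_tgt):                              # j-th target token (0-indexed)
        visible_src = min(k + j, n_src)                 # source tokens read before emitting it
        row = n_src + j
        mask[row, visible_src:n_src] = False            # hide the not-yet-read source suffix
    return mask

# Example: 6 source tokens, 4 target tokens, wait-3 policy.
print(wait_k_attention_mask(6, 4, 3).int())
```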
2023
Implicit Memory Transformer for Computationally Efficient Simultaneous Speech Translation
Matthew Raffel | Lizhong Chen
Findings of the Association for Computational Linguistics: ACL 2023
Simultaneous speech translation, in which a translation is generated concurrently with incoming speech, is an essential communication task that is difficult even for humans. For such a streaming task, transformers using block processing to break an input sequence into segments have achieved state-of-the-art performance at a reduced cost. Current methods for propagating information across segments, including left context and memory banks, fall short: they provide insufficient representations and are unnecessarily expensive to compute. In this paper, we propose an Implicit Memory Transformer that implicitly retains memory through a new left context method, removing the need to explicitly represent memory with memory banks. We generate the left context from the attention output of the previous segment and include it in the keys and values of the current segment’s attention calculation. Experiments on the MuST-C dataset show that the Implicit Memory Transformer provides a substantial speedup on the encoder forward pass with nearly identical translation quality when compared with the state-of-the-art approach that employs both left context and memory banks.
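A minimal sketch of the core mechanism follows, under our own simplifying assumptions (single attention head, full reuse of the previous segment's attention output as left context); the released code may differ in how much of that output is retained.

```python
# Sketch of implicit-memory attention over speech segments (our simplified assumption,
# not the released code): the attention output of the previous segment is prepended to
# the keys and values of the current segment, carrying memory forward implicitly.

import torch
import torch.nn.functional as F

def segment_attention(q_proj, k_proj, v_proj, segment, prev_attn_out=None):
    """Single-head attention over one segment with implicit-memory left context."""
    q = q_proj(segment)
    kv_input = segment if prev_attn_out is None else torch.cat([prev_attn_out, segment], dim=0)
    k, v = k_proj(kv_input), v_proj(kv_input)
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    attn_out = F.softmax(scores, dim=-1) @ v
    return attn_out                      # reused as left context for the next segment

# Streaming over segments: feed each segment's attention output forward as context.
d = 64
q_proj, k_proj, v_proj = (torch.nn.Linear(d, d) for _ in range(3))
prev = None
for seg in torch.randn(3, 10, d):        # three segments of 10 frames each
    prev = segment_attention(q_proj, k_proj, v_proj, seg, prev)
```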