Streaming Sequence Transduction through Dynamic Compression

Weiting Tan, Yunmo Chen, Tongfei Chen, Guanghui Qin, Haoran Xu, Chenyu Zhang, Benjamin Van Durme, Philipp Koehn


Abstract
We introduce STAR (Stream Transduction with Anchor Representations), a novel Transformer-based model designed for efficient sequence-to-sequence transduction over streams. STAR dynamically segments input streams to create compressed anchor representations, achieving nearly lossless (12x) compression in Automatic Speech Recognition (ASR) and outperforming existing methods. Moreover, STAR demonstrates superior segmentation and latency-quality trade-offs in simultaneous Speech Translation, optimizing latency, memory footprint, and quality.
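The core idea of the abstract, compressing a stream into one anchor vector per dynamically chosen segment, can be illustrated with a minimal sketch. This is not the paper's implementation: STAR learns both the segmentation and the anchor representations inside a Transformer, whereas here the segment boundaries are given and each anchor is simply the segment-final frame's vector.

```python
# Illustrative sketch only (assumed, not from the paper): compress a stream
# of frame vectors into one "anchor" vector per segment.
import numpy as np

def compress_to_anchors(frames: np.ndarray, boundaries: list[int]) -> np.ndarray:
    """frames: (T, d) array; boundaries: sorted segment-end indices (exclusive)."""
    anchors = []
    start = 0
    for end in boundaries:
        segment = frames[start:end]   # frames belonging to this segment
        anchors.append(segment[-1])   # anchor = last frame's representation
        start = end
    return np.stack(anchors)          # (num_segments, d)

# Example: 12 frames compressed to 3 anchors (a 4x reduction in this toy case;
# the paper reports ~12x nearly lossless compression on ASR).
frames = np.arange(12 * 4, dtype=float).reshape(12, 4)
anchors = compress_to_anchors(frames, [4, 8, 12])
print(anchors.shape)  # (3, 4)
```

Because downstream attention then operates over the short anchor sequence instead of every frame, memory and latency shrink roughly in proportion to the compression rate.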
Anthology ID:
2025.iwslt-1.1
Volume:
Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
Month:
July
Year:
2025
Address:
Vienna, Austria (in-person and online)
Editors:
Elizabeth Salesky, Marcello Federico, Antonis Anastasopoulos
Venues:
IWSLT | WS
Publisher:
Association for Computational Linguistics
Pages:
1–18
URL:
https://preview.aclanthology.org/landing_page/2025.iwslt-1.1/
Cite (ACL):
Weiting Tan, Yunmo Chen, Tongfei Chen, Guanghui Qin, Haoran Xu, Chenyu Zhang, Benjamin Van Durme, and Philipp Koehn. 2025. Streaming Sequence Transduction through Dynamic Compression. In Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025), pages 1–18, Vienna, Austria (in-person and online). Association for Computational Linguistics.
Cite (Informal):
Streaming Sequence Transduction through Dynamic Compression (Tan et al., IWSLT 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.iwslt-1.1.pdf