Unifying Streaming and Non-streaming Zipformer-based ASR

Bidisha Sharma, Karthik Pandia D S, Shankar Venkatesan, Jeena J Prakash, Shashi Kumar, Malolan Chetlur, Andreas Stolcke

Abstract
There has been increasing interest in unifying streaming and non-streaming automatic speech recognition (ASR) models to reduce development, training, and deployment costs. We present a unified framework that trains a single end-to-end ASR model for both streaming and non-streaming applications by leveraging future context information. We propose using dynamic right-context through chunked attention masking in the training of Zipformer-based ASR models. We demonstrate that right-context is more effective in Zipformer models than in other conformer-style models, owing to the Zipformer's multi-scale nature. We analyze the effect of varying the number of right-context frames on the accuracy and latency of streaming ASR models. Using LibriSpeech and large in-house conversational datasets, we train different versions of streaming and non-streaming models and evaluate them in a production-grade server-client setup across diverse test sets from different domains. The proposed strategy reduces word error rate by a relative 7.9% with only a small degradation in user-perceived latency. By adding more right-context frames, we achieve streaming performance close to that of non-streaming models. Our approach also allows flexible control of the latency-accuracy tradeoff to match customer requirements.
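The chunked attention masking with right-context described in the abstract can be illustrated with a minimal PyTorch sketch. The function name `make_chunked_attention_mask` and its parameters (`chunk_size`, `num_right_chunks`) are illustrative, not the paper's actual implementation; in the unified training regime, the amount of right context would be varied dynamically so that one model can serve both streaming and non-streaming inference.

```python
import torch

def make_chunked_attention_mask(num_frames: int,
                                chunk_size: int,
                                num_right_chunks: int) -> torch.Tensor:
    """Boolean self-attention mask for chunked streaming training.

    Frames are grouped into contiguous chunks of ``chunk_size``. A query
    frame may attend to every frame in its own chunk, all earlier chunks
    (unbounded left context), and up to ``num_right_chunks`` future chunks
    (the right context). Following the ``torch.nn.MultiheadAttention``
    convention, True marks positions that are NOT allowed to attend.
    """
    chunk_idx = torch.arange(num_frames) // chunk_size  # chunk id of each frame
    # Query i may see key j iff chunk(j) <= chunk(i) + num_right_chunks.
    allowed = chunk_idx.unsqueeze(1) + num_right_chunks >= chunk_idx.unsqueeze(0)
    return ~allowed

# Example: 8 frames, chunks of 2 frames, one chunk of right context.
print(make_chunked_attention_mask(8, 2, 1).int())
```

With `num_right_chunks = 0` this reduces to ordinary chunked causal attention (pure streaming); making it large enough to cover the whole utterance approaches full non-streaming attention, which is the latency-accuracy knob the abstract refers to.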
Anthology ID:
2025.acl-industry.87
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Georg Rehm, Yunyao Li
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1254–1262
URL:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.acl-industry.87/
Cite (ACL):
Bidisha Sharma, Karthik Pandia D S, Shankar Venkatesan, Jeena J Prakash, Shashi Kumar, Malolan Chetlur, and Andreas Stolcke. 2025. Unifying Streaming and Non-streaming Zipformer-based ASR. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track), pages 1254–1262, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Unifying Streaming and Non-streaming Zipformer-based ASR (Sharma et al., ACL 2025)
PDF:
https://preview.aclanthology.org/acl25-workshop-ingestion/2025.acl-industry.87.pdf