LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation

Keisuke Kamahori, Jungo Kasai, Noriyuki Kojima, Baris Kasikci


Abstract
Modern automatic speech recognition (ASR) models, such as OpenAI’s Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduces inference costs while maintaining transcription accuracy. Our approach leverages the strong low-rank properties observed in intermediate activations: by applying principal component analysis (PCA) with a small calibration dataset, we approximate linear transformations with a chain of low-rank matrix multiplications, and further optimize self-attention to work in reduced dimensionality. Evaluation results show that our method can compress Whisper large-v3’s encoder size by over 50%, matching Whisper medium’s size with better transcription accuracy, thereby establishing a new Pareto frontier of accuracy and efficiency. The code of LiteASR is available at https://github.com/efeslab/LiteASR.
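
As a concrete illustration of the idea in the abstract, the sketch below factors a linear layer y = x Wᵀ + b into a chain of two low-rank matrix multiplications using PCA over the layer's outputs on a calibration set. This is a minimal NumPy sketch of the general PCA-based factorization; all function and variable names are hypothetical, and it is not the authors' released implementation (see the repository linked above for that).

import numpy as np

def low_rank_factorize(W, b, X_calib, rank):
    """Factor y = x @ W.T + b into two small matmuls via PCA of the
    layer's outputs on calibration data (illustrative sketch only)."""
    Y = X_calib @ W.T + b                      # calibration outputs, shape (n, d_out)
    mu = Y.mean(axis=0)                        # PCA mean
    _, _, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    Vk = Vt[:rank].T                           # (d_out, rank): top-k principal directions
    # y ≈ mu + ((x @ W.T + b - mu) @ Vk) @ Vk.T; fold constants into small factors:
    W1 = W.T @ Vk                              # (d_in, rank): first low-rank factor
    W2 = Vk.T                                  # (rank, d_out): second low-rank factor
    b1 = (b - mu) @ Vk                         # bias expressed in the rank-k subspace
    return W1, W2, b1, mu

def low_rank_forward(x, W1, W2, b1, mu):
    # Two small matmuls replace one large d_in x d_out matmul.
    return (x @ W1 + b1) @ W2 + mu

# Toy check: when the calibration activations are (near) low-rank,
# the approximation is tight.
rng = np.random.default_rng(0)
d_in, d_out, n, k = 128, 128, 512, 16
W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)
b = rng.standard_normal(d_out)
X = rng.standard_normal((n, k)) @ rng.standard_normal((k, d_in))  # rank-k inputs
W1, W2, b1, mu = low_rank_factorize(W, b, X, rank=k)
err = np.abs(low_rank_forward(X, W1, W2, b1, mu) - (X @ W.T + b)).max()
print(f"max abs error: {err:.2e}")  # near machine precision for rank-k data

When the activations are effectively low-rank, as the paper observes for Whisper's encoder, the factorized layer costs n·d_in·k + n·k·d_out multiply-adds instead of n·d_in·d_out, a saving whenever k < d_in·d_out / (d_in + d_out).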
Anthology ID:
2025.emnlp-main.169
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3430–3442
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.169/
Cite (ACL):
Keisuke Kamahori, Jungo Kasai, Noriyuki Kojima, and Baris Kasikci. 2025. LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 3430–3442, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation (Kamahori et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.169.pdf
Checklist:
2025.emnlp-main.169.checklist.pdf