On the Robust Approximation of ASR Metrics

Abdul Waheed, Hanin Atwany, Rita Singh, Bhiksha Raj


Abstract
Recent advances in speech foundation models are largely driven by scaling both model size and data, enabling them to perform a wide range of tasks, including speech recognition. Traditionally, ASR models are evaluated using metrics like Word Error Rate (WER) and Character Error Rate (CER), which depend on ground-truth labels. Because labeled data from diverse domains and testing conditions is limited, the true generalization capabilities of these models beyond standard benchmarks remain unclear; moreover, labeling data is both costly and time-consuming. To address this, we propose a novel label-free approach for approximating ASR performance metrics, eliminating the need for ground-truth labels. Our method combines multimodal embeddings of speech and transcription representations in a unified space with proxy metrics computed against a high-quality proxy model, and uses these features to train a regression model that predicts key ASR metrics such as WER and CER. We experiment with over 40 models across 14 datasets representing both standard and in-the-wild testing conditions. Our results show that we approximate the metrics within a single-digit absolute difference across all experimental configurations, outperforming the most recent baseline by more than 50%.
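The pipeline the abstract describes can be sketched as a small regression problem. The snippet below is a minimal, hypothetical illustration (not the paper's implementation): it assumes two per-utterance features standing in for the paper's inputs, a speech-transcript embedding similarity from a shared multimodal space and a proxy WER measured against a strong proxy model's transcript, then fits a ridge regressor to predict the true WER. All feature values here are synthetic.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 200

# Hypothetical label-free features per utterance:
#  - sim: cosine similarity between speech and hypothesis-transcript
#    embeddings in a shared multimodal space
#  - proxy_wer: WER of the evaluated model's hypothesis scored against a
#    high-quality proxy model's transcript (no ground-truth labels needed)
sim = rng.uniform(0.5, 1.0, n)
proxy_wer = rng.uniform(0.0, 0.6, n)

# Synthetic "true" WER loosely tied to both signals, for illustration only
true_wer = 0.7 * proxy_wer + 0.4 * (1.0 - sim) + rng.normal(0.0, 0.02, n)

# Regress the target metric on the label-free features
X = np.column_stack([sim, proxy_wer])
reg = Ridge(alpha=1.0).fit(X, true_wer)
pred = reg.predict(X)
print(f"MAE: {mean_absolute_error(true_wer, pred):.3f}")
```

In practice the features would come from real embedding models and proxy transcripts, and the regressor would be trained on utterances where ground-truth WER is available, then applied to unlabeled test conditions.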
Anthology ID:
2025.findings-acl.1187
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
23119–23146
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1187/
Cite (ACL):
Abdul Waheed, Hanin Atwany, Rita Singh, and Bhiksha Raj. 2025. On the Robust Approximation of ASR Metrics. In Findings of the Association for Computational Linguistics: ACL 2025, pages 23119–23146, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
On the Robust Approximation of ASR Metrics (Waheed et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.1187.pdf