LASER: An LLM-based ASR Scoring and Evaluation Rubric

Amruta Parulekar, Preethi Jyothi


Abstract
Standard ASR evaluation metrics like Word Error Rate (WER) tend to unfairly penalize morphological and syntactic nuances that do not significantly alter sentence semantics. We introduce LASER, an LLM-based scoring rubric that leverages state-of-the-art LLMs' in-context learning abilities to learn from prompts with detailed examples. Hindi LASER scores using Gemini 2.5 Pro achieved a very high correlation of 94% with human annotations. Hindi examples in the prompt were also effective in analyzing errors in other Indian languages such as Marathi, Kannada, and Malayalam. We also demonstrate how a smaller LLM like Llama 3 can be finetuned on word-pair examples derived from reference and ASR predictions to predict what kind of penalty should be applied with close to 89% accuracy.
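The core idea above is to replace WER's uniform per-word penalty with a graded penalty that an LLM assigns to each reference/hypothesis word pair. Below is a minimal sketch of that scoring scheme; the `penalty` heuristic (shared-prefix test for morphological variants) is a hypothetical placeholder standing in for the LLM judgment the paper describes, and all function names are illustrative, not from the paper.

```python
import os
from difflib import SequenceMatcher

def penalty(ref_word: str, hyp_word: str) -> float:
    """Placeholder for the LLM penalty judgment. LASER would instead prompt
    an LLM (with few-shot examples) to grade how severe this error is."""
    if ref_word == hyp_word:
        return 0.0
    # Crude stand-in: a long shared prefix suggests a minor morphological
    # variant, which gets a reduced penalty instead of a full error.
    common = len(os.path.commonprefix([ref_word, hyp_word]))
    if common >= 3 and common >= max(len(ref_word), len(hyp_word)) - 2:
        return 0.5
    return 1.0

def laser_style_score(reference: str, hypothesis: str) -> float:
    """Penalty-weighted error rate: like WER, but substitutions are
    weighted by penalty() rather than always counting as 1."""
    ref, hyp = reference.split(), hypothesis.split()
    sm = SequenceMatcher(a=ref, b=hyp)
    total = 0.0
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            continue
        if tag == "replace":
            # Pair aligned words; any length mismatch counts as full errors.
            total += sum(penalty(r, h) for r, h in zip(ref[i1:i2], hyp[j1:j2]))
            total += abs((i2 - i1) - (j2 - j1))
        else:  # "delete" or "insert": full penalty per word
            total += max(i2 - i1, j2 - j1)
    return total / max(len(ref), 1)
```

For example, `laser_style_score("the cats ran home", "the cat ran home")` gives 0.125 where plain WER would give 0.25, because the "cats"/"cat" substitution is treated as a minor variant.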
Anthology ID:
2025.emnlp-main.1257
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
24773–24782
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1257/
Cite (ACL):
Amruta Parulekar and Preethi Jyothi. 2025. LASER: An LLM-based ASR Scoring and Evaluation Rubric. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 24773–24782, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
LASER: An LLM-based ASR Scoring and Evaluation Rubric (Parulekar & Jyothi, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1257.pdf
Checklist:
2025.emnlp-main.1257.checklist.pdf