Measuring the Effect of Transcription Noise on Downstream Language Understanding Tasks

Ori Shapira, Shlomo Chazan, Amir David Nissan Cohen


Abstract
With the increasing prevalence of recorded human speech, spoken language understanding (SLU) is essential for its efficient processing. In order to process the speech, it is commonly transcribed using automatic speech recognition technology. This speech-to-text conversion introduces errors into the transcripts, which subsequently propagate to downstream NLP tasks, such as dialogue summarization. While it is known that transcript noise affects downstream tasks, a general-purpose and systematic approach to analyzing its effects across different noise severities and types has been lacking. We propose a configurable framework for assessing task models in diverse noisy settings, and for examining the impact of transcript-cleaning techniques. The framework facilitates the investigation of task model behavior, which can in turn support the development of effective SLU solutions. We exemplify the utility of our framework on three SLU tasks and four task models, offering insights regarding the effect of transcript noise on tasks in general and models in particular. For instance, we find that task models can tolerate a certain level of noise, and are affected differently by the types of errors in the transcript.
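To make the abstract's idea of a "configurable framework" concrete, the sketch below illustrates one minimal way such an evaluation could look: inject synthetic ASR-style errors into clean transcripts at varying severities, run a downstream task model on the noisy text, and record how the task metric degrades. This is a hypothetical illustration only, and all function names, the noise model, and the evaluation loop are assumptions; it does not reproduce the paper's actual framework or its noise taxonomy.

```python
import random
from typing import Callable, Iterable

def inject_noise(transcript: str, severity: float, rng: random.Random,
                 vocab: list[str]) -> str:
    """Simulate ASR-style errors by randomly deleting, substituting, or
    duplicating words. `severity` is the fraction of words perturbed,
    serving as a rough stand-in for word error rate."""
    noisy = []
    for word in transcript.split():
        if rng.random() < severity:
            op = rng.choice(["delete", "substitute", "duplicate"])
            if op == "delete":
                continue                       # dropped word
            elif op == "substitute":
                noisy.append(rng.choice(vocab))  # misrecognized word
            else:
                noisy.extend([word, word])       # spurious insertion
        else:
            noisy.append(word)
    return " ".join(noisy)

def evaluate_under_noise(task_model: Callable[[str], str],
                         metric: Callable[[str, str], float],
                         dataset: Iterable[tuple[str, str]],
                         severities: list[float],
                         vocab: list[str],
                         seed: int = 0) -> dict[float, float]:
    """Run the task model on increasingly noisy transcripts and report the
    average metric score per severity level."""
    rng = random.Random(seed)
    results = {}
    for severity in severities:
        scores = []
        for transcript, reference in dataset:
            noisy = inject_noise(transcript, severity, rng, vocab)
            prediction = task_model(noisy)
            scores.append(metric(prediction, reference))
        results[severity] = sum(scores) / len(scores)
    return results

if __name__ == "__main__":
    # Toy usage: an identity "model" and a word-overlap "metric",
    # purely to show the shape of the evaluation loop.
    toy_data = [("the meeting starts at noon today", "meeting at noon")]
    identity_model = lambda text: text
    overlap = lambda pred, ref: (
        len(set(pred.split()) & set(ref.split())) / max(len(set(ref.split())), 1)
    )
    print(evaluate_under_noise(identity_model, overlap, toy_data,
                               severities=[0.0, 0.1, 0.3],
                               vocab=["uh", "um", "okay"]))
```

A transcript-cleaning step, as studied in the paper, would slot in between `inject_noise` and `task_model`, allowing the same loop to compare cleaned and uncleaned inputs.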
Anthology ID:
2025.acl-long.1449
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
29978–30004
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1449/
Cite (ACL):
Ori Shapira, Shlomo Chazan, and Amir David Nissan Cohen. 2025. Measuring the Effect of Transcription Noise on Downstream Language Understanding Tasks. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 29978–30004, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Measuring the Effect of Transcription Noise on Downstream Language Understanding Tasks (Shapira et al., ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1449.pdf