WER we are and WER we think we are

Piotr Szymański, Piotr Żelasko, Mikolaj Morzy, Adrian Szymczak, Marzena Żyła-Hoppe, Joanna Banaszczak, Lukasz Augustyniak, Jan Mizgajski, Yishay Carmiel


Abstract
Natural language processing of conversational speech requires the availability of high-quality transcripts. In this paper, we express our skepticism towards the recent reports of very low Word Error Rates (WERs) achieved by modern Automatic Speech Recognition (ASR) systems on benchmark datasets. We outline several problems with popular benchmarks and compare three state-of-the-art commercial ASR systems on an internal dataset of real-life spontaneous human conversations and the HUB'05 public benchmark. We show that WERs are significantly higher than the best reported results. We formulate a set of guidelines which may aid in the creation of real-life, multi-domain datasets with high-quality annotations for training and testing of robust ASR systems.
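For readers unfamiliar with the metric: WER is the word-level edit (Levenshtein) distance between a reference transcript and an ASR hypothesis, normalised by the number of reference words, i.e. WER = (S + D + I) / N for substitutions, deletions and insertions. The sketch below is not from the paper; it is a minimal, dependency-free illustration, and the function name and example strings are our own.

def word_error_rate(reference: str, hypothesis: str) -> float:
    # Illustrative sketch (not the paper's code): word-level Levenshtein
    # distance divided by the number of reference words.
    ref = reference.split()
    hyp = hypothesis.split()
    if not ref:
        raise ValueError("reference must contain at least one word")
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)

# Example (hypothetical strings): two substitutions over six reference words -> 2/6.
print(word_error_rate("we show that wers are higher", "we show that wer is higher"))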
Anthology ID:
2020.findings-emnlp.295
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3290–3295
URL:
https://aclanthology.org/2020.findings-emnlp.295
DOI:
10.18653/v1/2020.findings-emnlp.295
Cite (ACL):
Piotr Szymański, Piotr Żelasko, Mikolaj Morzy, Adrian Szymczak, Marzena Żyła-Hoppe, Joanna Banaszczak, Lukasz Augustyniak, Jan Mizgajski, and Yishay Carmiel. 2020. WER we are and WER we think we are. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3290–3295, Online. Association for Computational Linguistics.
Cite (Informal):
WER we are and WER we think we are (Szymański et al., Findings 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2020.findings-emnlp.295.pdf
Video:
https://slideslive.com/38940634
Data:
LibriSpeech, TED-LIUM