Proceedings of the 1st Workshop on Benchmarking: Past, Present and Future
Kenneth Church, Mark Liberman, Valia Kordoni (Editors)
- Anthology ID:
- 2021.bppf-1
- Month:
- Aug
- Year:
- 2021
- Address:
- Online
- Venue:
- BPPF
- SIG:
- Publisher:
- Association for Computational Linguistics
- URL:
- https://aclanthology.org/2021.bppf-1
- DOI:
- PDF:
- https://preview.aclanthology.org/improve-issue-templates/2021.bppf-1.pdf
Benchmarking: Past, Present and Future
Kenneth Church | Mark Liberman | Valia Kordoni
Where have we been, and where are we going? It is easier to talk about the past than the future. These days, benchmarks evolve more bottom-up (e.g., Papers with Code). There used to be more top-down leadership from government (and from industry, in the case of systems, with benchmarks such as SPEC). Going forward, there may be more top-down leadership from organizations like MLPerf and/or influencers like David Ferrucci, who was responsible for IBM’s success with Jeopardy, and who has recently written a paper suggesting how the community should think about benchmarking for machine comprehension. Tasks such as reading comprehension become even more interesting as we move beyond English. Multilinguality introduces many challenges, and even more opportunities.
Guideline Bias in Wizard-of-Oz Dialogues
Victor Petrén Bach Hansen | Anders Søgaard
NLP models struggle with generalization due to sampling and annotator bias. This paper focuses on a different kind of bias that has received very little attention: guideline bias, i.e., the bias introduced by how our annotator guidelines are formulated. We examine two recently introduced dialogue datasets, CCPE-M and Taskmaster-1, both collected by trained assistants in a Wizard-of-Oz set-up. For CCPE-M, we show how a simple lexical bias for the word “like” in the guidelines biases the data collection. This bias, in effect, leads to poor performance on data without it: a preference elicitation architecture based on BERT suffers a 5.3% absolute drop in performance when “like” is replaced with a synonymous phrase, and a 13.2% drop when evaluated on out-of-sample data. For Taskmaster-1, we show how the order in which instructions are presented biases the data collection.
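The lexical-substitution check described in this abstract can be sketched as follows: evaluate a trained classifier on the original test set and on a copy in which the guideline cue word “like” is swapped for a synonymous phrase, then compare accuracies. The paraphrase, data fields, and classify wrapper below are illustrative placeholders, not the authors’ exact setup.

```python
import re
from typing import Callable, Dict, List


def lexical_bias_gap(examples: List[Dict[str, str]],
                     classify: Callable[[str], str],
                     cue: str = "like",
                     paraphrase: str = "am fond of") -> float:
    """Compare accuracy on the original test set vs. a copy in which the
    guideline-induced cue word is replaced by a synonymous phrase.
    A large gap suggests the model latched onto the cue, not the intent."""

    def accuracy(texts: List[str], labels: List[str]) -> float:
        return sum(classify(t) == y for t, y in zip(texts, labels)) / len(labels)

    texts = [ex["text"] for ex in examples]
    labels = [ex["label"] for ex in examples]
    # Swap the cue word for a synonymous phrase in every test utterance.
    swapped = [re.sub(rf"\b{re.escape(cue)}\b", paraphrase, t) for t in texts]
    return accuracy(texts, labels) - accuracy(swapped, labels)

# `classify` would wrap a fine-tuned preference-elicitation model (e.g., BERT);
# a drop of several points on the swapped set mirrors the effect reported above.
```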
We Need to Consider Disagreement in Evaluation
Valerio Basile | Michael Fell | Tommaso Fornaciari | Dirk Hovy | Silviu Paun | Barbara Plank | Massimo Poesio | Alexandra Uma
Evaluation is of paramount importance in data-driven research fields such as Natural Language Processing (NLP) and Computer Vision (CV). Current evaluation practice largely hinges on the existence of a single “ground truth” against which we can meaningfully compare the prediction of a model. However, this comparison is flawed for two reasons. 1) In many cases, more than one answer is correct. 2) Even where there is a single answer, disagreement among annotators is ubiquitous, making it difficult to decide on a gold standard. We argue that the current methods of adjudication, agreement, and evaluation need serious reconsideration. Some researchers now propose to minimize disagreement and to fix datasets. We argue that this is a gross oversimplification, and likely to conceal the underlying complexity. Instead, we suggest that we need to better capture the sources of disagreement to improve today’s evaluation practice. We discuss three sources of disagreement: from the annotator, the data, and the context, and show how this affects even seemingly objective tasks. Datasets with multiple annotations are becoming more common, as are methods to integrate disagreement into modeling. The logical next step is to extend this to evaluation.
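One way to make the “logical next step” concrete is to score predictions against the full distribution of annotator labels rather than a single adjudicated gold label, e.g., with a cross-entropy over soft labels. The toy counts, predictions, and function name below are assumptions for illustration, not taken from the paper.

```python
import numpy as np


def soft_label_cross_entropy(annotator_counts: np.ndarray,
                             predicted_probs: np.ndarray) -> float:
    """Score predictions against the distribution of annotator labels
    rather than a single adjudicated 'gold' label."""
    # Normalize per-item vote counts into soft labels.
    soft_labels = annotator_counts / annotator_counts.sum(axis=1, keepdims=True)
    eps = 1e-12  # avoid log(0)
    return float(-(soft_labels * np.log(predicted_probs + eps)).sum(axis=1).mean())

# Toy example: 3 items, 2 classes, 5 annotators each.
# Item 3 is genuinely ambiguous (3 vs. 2 votes); a hard gold label would hide that.
counts = np.array([[5, 0], [0, 5], [3, 2]])
preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
print(soft_label_cross_entropy(counts, preds))
```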
How Might We Create Better Benchmarks for Speech Recognition?
Alëna Aksënova | Daan van Esch | James Flynn | Pavel Golik
The applications of automatic speech recognition (ASR) systems are proliferating, in part due to recent significant quality improvements. However, as recent work indicates, even state-of-the-art speech recognition systems – some of which deliver impressive benchmark results – struggle to generalize across use cases. We review relevant work and, hoping to inform future benchmark development, outline a taxonomy of speech recognition use cases proposed for the next generation of ASR benchmarks. We also survey work on metrics beyond the de facto standard Word Error Rate (WER), and introduce a versatile framework designed to describe interactions between linguistic variation and ASR performance metrics.
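Since Word Error Rate is named as the de facto standard metric, a minimal sketch of how it is computed (word-level Levenshtein distance normalized by reference length) may help ground the discussion of alternative metrics; the function and example strings below are illustrative, not from the paper.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Compute WER as word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub,               # substitution (or match)
                           dp[i - 1][j] + 1,  # deletion
                           dp[i][j - 1] + 1)  # insertion
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substitution and one deletion over four reference words -> WER 0.5
print(word_error_rate("the cat sat on", "the bat sat"))
```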