Nicholas Howell


2020

An Unsupervised Method for Weighting Finite-state Morphological Analyzers
Amr Keleg | Francis M. Tyers | Nicholas Howell | Tommi A. Pirinen
Proceedings of the Twelfth Language Resources and Evaluation Conference

Morphological analysis is a task that has been studied for years, and a range of techniques has been used to build models for it. Models based on finite-state transducers have proved particularly suitable for languages with few available resources. In this paper, we develop a method for weighting a morphological analyzer built using finite-state transducers in order to disambiguate its results. The method is based on a word2vec model that is trained in a completely unsupervised way on raw, untagged corpora and is able to capture the semantics of words. Most methods for disambiguating the output of a morphological analyzer rely on tagged corpora that must be built manually. Additionally, unlike most other techniques, which rely heavily on a word's context to disambiguate its set of candidate analyses, our method uses information about the token irrespective of its context.
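The idea of scoring an analyzer's candidate analyses with distributional similarity can be sketched roughly as follows. This is a toy illustration only, not the paper's implementation: the count-based context vectors stand in for a trained word2vec model, and the corpus, the surface form "dogs", its candidate analyses, and the lemma "dox" are all made-up assumptions.

```python
from collections import Counter
from math import sqrt

# Tiny raw, untagged corpus (illustrative assumption).
corpus = [
    "the dogs run fast",
    "the dog runs fast",
    "a dog runs home",
]

def context_vectors(sentences, window=2):
    """Bag-of-context count vectors for every token: a crude stand-in
    for word2vec embeddings trained on the raw corpus."""
    vecs = {}
    for sent in sentences:
        toks = sent.split()
        for i, tok in enumerate(toks):
            ctx = toks[max(0, i - window):i] + toks[i + 1:i + 1 + window]
            vecs.setdefault(tok, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = context_vectors(corpus)

# Hypothetical FST output for the surface form "dogs": each candidate
# analysis pairs an Apertium-style tag string with the lemma it posits
# ("dox" is a deliberately unseen lemma, for contrast).
candidates = {"dog<n><pl>": "dog", "dox<vblex><pri><p3><sg>": "dox"}

# Score each candidate by similarity between the surface form and its
# lemma; a real weighted FST would convert these scores into path weights.
weights = {
    analysis: cosine(vecs["dogs"], vecs.get(lemma, Counter()))
    for analysis, lemma in candidates.items()
}
best = max(weights, key=weights.get)
```

Here the plural-noun analysis wins because "dog" occurs in contexts similar to those of "dogs", while the unseen lemma scores zero; note that, as in the abstract, the scoring uses no information about the token's sentence context.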

Artie Bias Corpus: An Open Dataset for Detecting Demographic Bias in Speech Applications
Josh Meyer | Lindy Rauchenstein | Joshua D. Eisenberg | Nicholas Howell
Proceedings of the Twelfth Language Resources and Evaluation Conference

We describe the creation of the Artie Bias Corpus, an English dataset of expert-validated <audio, transcript> pairs with demographic tags for age, gender, and accent. We also release open software which may be used with the Artie Bias Corpus to detect demographic bias in Automatic Speech Recognition systems, and which can be extended to other speech technologies. The Artie Bias Corpus is a curated subset of the Mozilla Common Voice corpus, which we release under a Creative Commons CC0 license, the most open and permissive license for data. This article details the criteria used to select and annotate the Artie Bias Corpus, in addition to experiments in which we detect and attempt to mitigate bias in end-to-end speech recognition models. We observe a significant accent bias in our baseline DeepSpeech model, with more accurate transcriptions of US English than of Indian English. We do not, however, find evidence of a significant gender bias. We then show significant improvements on individual demographic groups from fine-tuning.
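The kind of per-group comparison used to surface demographic bias in ASR output can be sketched as follows: compute word error rate (WER) separately for each demographic tag and compare the groups. The utterances, tags, and hypothesis transcripts below are made-up illustrations, not Artie Bias Corpus data or actual DeepSpeech output.

```python
def wer(ref, hyp):
    """Word error rate: Levenshtein distance over word lists,
    normalized by the reference length."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

# Toy <reference, hypothesis> pairs tagged by accent (illustrative only).
samples = [
    {"accent": "us", "ref": "turn the lights on", "hyp": "turn the lights on"},
    {"accent": "us", "ref": "play some music", "hyp": "play some music"},
    {"accent": "indian", "ref": "turn the lights on", "hyp": "turn the light on"},
    {"accent": "indian", "ref": "play some music", "hyp": "play sum music"},
]

def group_wer(samples, tag):
    """Aggregate word errors and reference lengths per demographic group,
    then report one corpus-level WER per group."""
    totals = {}
    for s in samples:
        g = s[tag]
        errs, n = totals.get(g, (0.0, 0))
        r = s["ref"].split()
        totals[g] = (errs + wer(s["ref"], s["hyp"]) * len(r), n + len(r))
    return {g: e / n for g, (e, n) in totals.items()}

by_accent = group_wer(samples, "accent")
```

A gap between the groups' WERs, as in this toy data, is the raw signal; the released software would additionally need a significance test before calling the gap a bias.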