2022
Phoneme transcription of endangered languages: an evaluation of recent ASR architectures in the single speaker scenario
Gilles Boulianne
Findings of the Association for Computational Linguistics: ACL 2022
Transcription is often reported as the bottleneck in endangered language documentation, requiring large efforts from scarce speakers and transcribers. In general, automatic speech recognition (ASR) can be accurate enough to accelerate transcription only if trained on large amounts of transcribed data. However, when a single speaker is involved, several studies have reported encouraging results for phonetic transcription even with small amounts of training. Here we expand this body of work on speaker-dependent transcription by comparing four ASR approaches, notably recent transformer and pretrained multilingual models, on a common dataset of 11 languages. To automate data preparation, training and evaluation steps, we also developed a phoneme recognition setup which handles morphologically complex languages and writing systems for which no pronunciation dictionary exists. We find that fine-tuning a multilingual pretrained model yields an average phoneme error rate (PER) of 15% for 6 languages with 99 minutes or less of transcribed data for training. For the 5 languages with between 100 and 192 minutes of training, we achieved a PER of 8.4% or less. These results on a number of varied languages suggest that ASR can now significantly reduce transcription efforts in the speaker-dependent situation common in endangered language work.
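The phoneme error rate (PER) reported above is the standard edit-distance metric: the Levenshtein distance between the hypothesis and the reference phoneme sequences, divided by the reference length. A minimal Python sketch of the computation (illustrative only, not code from the paper):

    def phoneme_error_rate(reference, hypothesis):
        """PER = edit distance between phoneme sequences / reference length."""
        n, m = len(reference), len(hypothesis)
        # dp[i][j] holds the edit distance between reference[:i] and hypothesis[:j].
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            dp[i][0] = i
        for j in range(m + 1):
            dp[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                               dp[i][j - 1] + 1,          # insertion
                               dp[i - 1][j - 1] + cost)   # substitution
        return dp[n][m] / max(n, 1)

    # One substitution over five reference phonemes gives PER = 0.2 (20%).
    print(phoneme_error_rate(list("atiku"), list("ateku")))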
Progress in Multilingual Speech Recognition for Low Resource Languages Kurmanji Kurdish, Cree and Inuktut
Vishwa Gupta | Gilles Boulianne
Proceedings of the Thirteenth Language Resources and Evaluation Conference
This contribution presents our efforts to develop automatic speech recognition (ASR) systems for three low-resource languages: Kurmanji Kurdish, Cree and Inuktut. As a first step, we generate multilingual models from acoustic training data in 12 different languages within the hybrid DNN/HMM framework. We explore different strategies for combining the phones from different languages: either keep the phone labels separate for each language or merge the common phones. For Kurmanji Kurdish and Inuktut, keeping the phones separate gives a much lower word error rate (WER), while merging phones gives a lower WER for Cree. These WERs are lower than those obtained by training the acoustic models separately for each language. We also compare two DNN architectures: factored time delay neural network (TDNN-F) and bidirectional long short-term memory (BLSTM) acoustic models. The TDNN-F acoustic models give significantly lower WER for Kurmanji Kurdish and Cree, while the BLSTM acoustic models give significantly lower WER for Inuktut. We also show that, for each language, training the multilingual acoustic models for one more epoch on acoustic data from that language reduces the WER significantly. Finally, we added 512-dimensional embedding features from the cross-lingual pre-trained wav2vec2.0 XLSR-53 model, but they led to only a small reduction in WER.
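The two phone-combination strategies compared above can be pictured as a choice of label set for the multilingual acoustic model. A hypothetical Python sketch (the language codes, toy phone sets and helper function are illustrative assumptions, not the authors' code): keeping phones separate amounts to tagging each phone with its language, while merging uses one shared label for phones common to several languages.

    def build_phone_inventory(lang_phones, merge):
        """Return the set of acoustic-model target labels for multilingual training."""
        if merge:
            # Merged strategy: one shared label per phone, pooled over all languages.
            return set().union(*lang_phones.values())
        # Separate strategy: language-tagged labels keep targets language-specific.
        return {f"{lang}_{p}" for lang, phones in lang_phones.items() for p in phones}

    # Toy inventories for Kurmanji Kurdish, Cree and Inuktut (illustrative only).
    inventory = {"kmr": {"a", "b", "sh"}, "crk": {"a", "ch"}, "iku": {"a", "q"}}
    print(sorted(build_phone_inventory(inventory, merge=True)))
    # ['a', 'b', 'ch', 'q', 'sh']
    print(sorted(build_phone_inventory(inventory, merge=False)))
    # ['crk_a', 'crk_ch', 'iku_a', 'iku_q', 'kmr_a', 'kmr_b', 'kmr_sh']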
2020
The Indigenous Languages Technology project at NRC Canada: An empowerment-oriented approach to developing language software
Roland Kuhn | Fineen Davis | Alain Désilets | Eric Joanis | Anna Kazantseva | Rebecca Knowles | Patrick Littell | Delaney Lothian | Aidan Pine | Caroline Running Wolf | Eddie Santos | Darlene Stewart | Gilles Boulianne | Vishwa Gupta | Brian Maracle Owennatékha | Akwiratékha’ Martin | Christopher Cox | Marie-Odile Junker | Olivia Sammons | Delasie Torkornoo | Nathan Thanyehténhas Brinklow | Sara Child | Benoît Farley | David Huggins-Daines | Daisy Rosenblum | Heather Souter
Proceedings of the 28th International Conference on Computational Linguistics
This paper surveys the first, three-year phase of a project at the National Research Council of Canada that is developing software to assist Indigenous communities in Canada in preserving their languages and extending their use. The project aimed to work within the empowerment paradigm, where collaboration with communities and fulfillment of their goals is central. Since many of the technologies we developed were in response to community needs, the project ended up as a collection of diverse subprojects. These include a sophisticated framework for building verb conjugators for highly inflectional polysynthetic languages (such as Kanyen’kéha, in the Iroquoian language family); the release of what is probably the largest available corpus of sentences in a polysynthetic language (Inuktut) aligned with English sentences, along with experiments with machine translation (MT) systems trained on this corpus; free online services based on automatic speech recognition (ASR) for easing the transcription bottleneck for recordings of speech in Indigenous languages (and other languages); software for implementing text prediction and read-along audiobooks for Indigenous languages; and several other subprojects.
Automatic Transcription Challenges for Inuktitut, a Low-Resource Polysynthetic Language
Vishwa Gupta | Gilles Boulianne
Proceedings of the Twelfth Language Resources and Evaluation Conference
We introduce the first attempt at automatic speech recognition (ASR) in Inuktitut, as a representative of polysynthetic, low-resource languages, like many of the 900 Indigenous languages spoken in the Americas. As in most previous work on Inuktitut, we use texts from parliament proceedings, but in addition we have access to 23 hours of transcribed oral stories. With this corpus, we show that Inuktitut displays a much higher degree of polysynthesis than other agglutinative languages usually considered in ASR, such as Finnish or Turkish. Even with a vocabulary of 1.3 million words derived from the proceedings and stories, more than 60% of the words in held-out stories are out-of-vocabulary. We train bidirectional LSTM acoustic models, then investigate word and subword units (morphemes and syllables) as well as a deep neural network that finds word boundaries in subword sequences. We show that acoustic decoding using syllables decorated with word boundary markers results in the lowest word error rate.
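The winning unit here is the syllable decorated with a word boundary marker, which lets word boundaries be recovered after subword decoding. A minimal Python sketch of one possible decoration scheme (the "+" continuation marker and the toy syllabifications are assumptions for illustration, not the paper's exact scheme):

    def decorate(word_syllables):
        """Mark every non-final syllable with '+' so word boundaries survive decoding."""
        return [s + "+" for s in word_syllables[:-1]] + [word_syllables[-1]]

    def restore_words(units):
        """Invert the decoration: glue '+'-marked syllables onto the following unit."""
        words, current = [], ""
        for u in units:
            if u.endswith("+"):
                current += u[:-1]
            else:
                words.append(current + u)
                current = ""
        return words

    # Toy syllabifications of two Inuktitut-like words (illustrative only).
    units = decorate(["i", "nuk", "ti", "tut"]) + decorate(["u", "qau", "siq"])
    print(units)                 # ['i+', 'nuk+', 'ti+', 'tut', 'u+', 'qau+', 'siq']
    print(restore_words(units))  # ['inuktitut', 'uqausiq']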
Speech Transcription Challenges for Resource Constrained Indigenous Language Cree
Vishwa Gupta | Gilles Boulianne
Proceedings of the 1st Joint Workshop on Spoken Language Technologies for Under-resourced languages (SLTU) and Collaboration and Computing for Under-Resourced Languages (CCURL)
Cree is one of the most spoken Indigenous languages in Canada. From a speech recognition perspective, it is a low-resource language, since very little data is available for either acoustic or language modeling. This has prevented the development of speech technology that could help revitalize the language. We describe our experiments with available Cree data to improve automatic transcription in both speaker-independent and speaker-dependent scenarios. While it was difficult to get low speaker-independent word error rates with only six speakers, we were able to get low word and phoneme error rates in the speaker-dependent scenario. We compare our phoneme recognition with two state-of-the-art open-source phoneme recognition toolkits, which use end-to-end training and sequence-to-sequence modeling. Our phoneme error rate (8.7%) is significantly lower than that achieved by the best of these systems (15.1%). With these systems and varying amounts of transcribed and text data, we show that pre-training on other languages is important for speaker-independent recognition, and that even small amounts of additional text-only documents are useful. These results can guide practical language documentation work when deciding how much transcribed and text data is needed to achieve useful phoneme accuracies.
2009
Incorporating Knowledge of Source Language Text in a System for Dictation of Document Translations
Aarthi Reddy | Richard Rose | Hani Safadi | Samuel Larkin | Gilles Boulianne
Proceedings of Machine Translation Summit XII: Papers
2007
Real-Time Correction of Closed-Captions
Patrick Cardinal | Gilles Boulianne | Michel Comeau | Maryse Boisvert
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions
2002
Disambiguation of Finite-State Transducers
N. Smaili | P. Cardinal | G. Boulianne | P. Dumouchel
COLING 2002: The 19th International Conference on Computational Linguistics
1992
An A* algorithm for very large vocabulary continuous speech recognition
P. Kenny | R. Hollan | G. Boulianne | H. Garudadri | M. Lennig | D. O’Shaughnessy
Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, February 23-26, 1992