2020
Large Vocabulary Read Speech Corpora for Four Ethiopian Languages: Amharic, Tigrigna, Oromo, and Wolaytta
Solomon Teferra Abate | Martha Yifiru Tachbelie | Michael Melese | Hafte Abera | Tewodros Gebreselassie | Wondwossen Mulugeta | Yaregal Assabie | Million Meshesha Beyene | Solomon Atinafu | Binyam Ephrem Seyoum
Proceedings of the Fourth Widening Natural Language Processing Workshop
Automatic Speech Recognition (ASR) is one of the most important technologies to help people live a better life in the 21st century. However, its development requires a large speech corpus for a language, and the development of such a corpus is expensive, especially for under-resourced Ethiopian languages. To address this problem, we have developed four medium-sized (longer than 22 hours each) speech corpora for four Ethiopian languages: Amharic, Tigrigna, Oromo, and Wolaytta. To check the usability of the corpora and to deliver a baseline ASR for each language, we present in this paper the corpora and the baseline ASR systems developed for each of the four languages. The word error rates (WERs) we achieved show that the corpora are usable for further investigation, and we recommend collecting text corpora to train stronger language models, particularly for Oromo and Wolaytta.
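The baseline systems above are reported in terms of word error rate (WER). As a hedged illustration only, not code from the paper, the short Python sketch below computes WER as the word-level edit distance between a reference and a hypothesis transcript, normalised by the reference length; the function name and example strings are invented for this sketch.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Toy example: one substitution in a four-word reference -> WER = 0.25
print(wer("the corpus is usable", "the corpora is usable"))
```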
A Translation-Based Approach to Morphology Learning for Low Resource Languages
Tewodros Gebreselassie | Amanuel Mersha | Michael Gasser
Proceedings of the Fourth Widening Natural Language Processing Workshop
“Low resource languages” usually refers to languages that lack corpora and basic tools such as part-of-speech taggers. But a significant number of such languages do benefit from the availability of relatively complex linguistic descriptions of phonology, morphology, and syntax, as well as dictionaries. A further category, probably the majority of the world’s languages, suffers from the lack of even these resources. In this paper, we investigate the possibility of learning the morphology of such a language by relying on its close relationship to a language with more resources. Specifically, we use a transfer-based approach to learn the morphology of the severely under-resourced language Gofa, starting with a neural morphological generator for the closely related language, Wolaytta. Both languages are members of the Omotic family, spoken in southwestern Ethiopia, and, like other Omotic languages, both are morphologically complex. We first create a finite-state transducer for morphological analysis and generation for Wolaytta, based on relatively complete linguistic descriptions and lexicons for the language. Next, we train an encoder-decoder neural network on the task of morphological generation for Wolaytta, using data generated by the FST. Such a network takes a root and a set of grammatical features as input and generates a word form as output. We then elicit Gofa translations of a small set of Wolaytta words from bilingual speakers. Finally, we retrain the decoder of the Wolaytta network, using a small set of Gofa target words that are translations of the Wolaytta outputs of the original network. The evaluation shows that the transfer network performs better than a separate encoder-decoder network trained on a larger set of Gofa words. We conclude with implications for the learning of morphology for severely under-resourced languages in regions where there are related languages with more resources.
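The pipeline described in this abstract (an FST-bootstrapped encoder-decoder for Wolaytta whose decoder is then re-trained on a small elicited Gofa set) can be pictured with the minimal PyTorch sketch below. This is an assumption-laden illustration, not the authors' implementation: the GRU architecture, vocabulary sizes, symbol inventory, and toy batch are placeholders invented for this sketch.

```python
# Minimal sketch of the transfer setup: encode a root plus grammatical feature
# tags, decode a surface word form, then freeze the encoder and re-train the
# decoder on a small Gofa set. PyTorch assumed; all sizes are illustrative.
import torch
import torch.nn as nn

PAD, SOS, EOS = 0, 1, 2

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim, padding_idx=PAD)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim, padding_idx=PAD)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt_in):
        # src: root characters followed by feature tags, e.g. root + <3SG> <PAST>
        _, h = self.encoder(self.src_emb(src))      # summary of root + features
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)
        return self.out(dec_out)                    # logits over output characters

model = Seq2Seq(src_vocab=60, tgt_vocab=40)

# Stage 1 (Wolaytta): train the whole network on FST-generated
# (root + features, word form) pairs. Full training loop omitted here.
optim_all = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 2 (Gofa transfer): freeze the encoder, re-train the decoder and output
# layer on the small set of elicited Gofa target forms.
for p in model.encoder.parameters():
    p.requires_grad = False
optim_transfer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)
# Toy batch: 2 examples, source length 6, target length 5 (integer symbol ids).
src = torch.randint(3, 60, (2, 6))
tgt = torch.randint(3, 40, (2, 5))
tgt_in = torch.cat([torch.full((2, 1), SOS), tgt[:, :-1]], dim=1)
logits = model(src, tgt_in)
loss = loss_fn(logits.reshape(-1, logits.size(-1)), tgt.reshape(-1))
loss.backward()
optim_transfer.step()
```

Freezing the encoder keeps the representation learned from the abundant FST-generated Wolaytta data, while only the decoder adapts to Gofa surface forms, which mirrors the transfer setup the abstract describes.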
2019
English-Ethiopian Languages Statistical Machine Translation
Solomon Teferra Abate | Michael Melese | Martha Yifiru Tachbelie | Million Meshesha | Solomon Atinafu | Wondwossen Mulugeta | Yaregal Assabie | Hafte Abera | Biniyam Ephrem | Tewodros Gebreselassie | Wondimagegnhue Tsegaye Tufa | Amanuel Lemma | Tsegaye Andargie | Seifedin Shifaw
Proceedings of the 2019 Workshop on Widening NLP
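The bi-directional SMT systems in this paper are scored with BLEU. The sketch below, which assumes the sacrebleu package and uses invented placeholder sentences rather than data from the paper's corpora, shows how such a corpus-level score is typically computed for one translation direction.

```python
# Minimal BLEU sketch (sacrebleu assumed; sentences are placeholders).
import sacrebleu

en_to_am_hyps = ["selam alem"]      # system outputs, one string per sentence
en_to_am_refs = [["selam alem"]]    # one reference stream, aligned with the hypotheses

bleu = sacrebleu.corpus_bleu(en_to_am_hyps, en_to_am_refs)
print(f"BLEU = {bleu.score:.2f}")

# Word-level BLEU is sensitive to morphological richness: a single wrong affix
# makes the whole token mismatch, which is consistent with the abstract's note
# that performance drops when the target is an Ethiopian language.
```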
In this paper, we describe an attempt towards the development of parallel corpora for English and Ethiopian Languages, such as Amharic, Tigrigna, Afan-Oromo, Wolaytta and Ge’ez. The corpora are used for conducting bi-directional SMT experiments. The BLEU scores of the bi-directional SMT systems show a promising result. The morphological richness of the Ethiopian languages has a great impact on the performance of SMT especially when the targets are Ethiopian languages.