Maximum entropy language modeling for Russian ASR

Evgeniy Shin, Sebastian Stüker, Kevin Kilgour, Christian Fügen, Alex Waibel


Abstract
Russian is a challenging language for automatic speech recognition systems due to its rich morphology. This rich morphology stems from Russian's highly inflectional nature and the frequent use of pre- and suffixes. Russian also has a very free word order, with changes in word order used to convey different connotations of a sentence. Dealing with these phenomena is rather difficult for traditional n-gram models. We therefore investigate in this paper the use of a maximum entropy language model for Russian whose features are specifically designed to deal with the inflections in Russian, as well as the loose word order. We combine this with a subword-based language model in order to alleviate the problem of the large vocabulary sizes necessary for dealing with highly inflecting languages. Applying the maximum entropy language model during re-scoring improves the word error rate of our recognition system by 1.2% absolute, while the use of the subword-based language model reduces the vocabulary size from 120k to 40k and the OOV rate from 4.8% to 2.1%.
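To make the idea concrete, here is a minimal toy sketch of a maximum entropy language model of the kind the abstract describes: the probability of a word given its history is proportional to the exponentiated sum of weighted feature values, P(w|h) ∝ exp(Σᵢ λᵢ fᵢ(h, w)). The specific features (a bigram feature, a word-suffix feature standing in for inflectional endings, and an order-insensitive "bag-of-history" feature standing in for free word order) and all weights below are illustrative assumptions, not the paper's actual feature set.

```python
import math
from collections import defaultdict

def features(history, word):
    """Extract binary feature identifiers for a (history, word) pair.
    These three feature templates are hypothetical examples:
      - a standard bigram feature (previous word + current word),
      - a bag-of-history feature, insensitive to word order,
      - a suffix feature keying on the word's ending (inflection proxy)."""
    feats = [("suffix", word[-2:])]
    if history:
        feats.append(("bigram", history[-1], word))
        feats.append(("bag", frozenset(history), word))
    return feats

def maxent_prob(history, word, vocab, weights):
    """P(w|h) = exp(score(h, w)) / Z(h), normalized over the vocabulary."""
    def score(w):
        return sum(weights.get(f, 0.0) for f in features(history, w))
    z = sum(math.exp(score(w)) for w in vocab)
    return math.exp(score(word)) / z

# Toy usage with made-up weights (in practice, weights are trained, e.g.
# by maximizing the training-data likelihood with L-BFGS or GIS).
vocab = ["дом", "дома", "видел"]
weights = defaultdict(float, {("suffix", "ма"): 1.0,
                              ("bigram", "я", "видел"): 2.0})
p = maxent_prob(["я"], "видел", vocab, weights)
```

The appeal over plain n-grams is that features of different granularities (suffixes, unordered context words, regular n-grams) are combined in a single normalized model rather than through back-off.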
Anthology ID:
2013.iwslt-papers.13
Volume:
Proceedings of the 10th International Workshop on Spoken Language Translation: Papers
Month:
December 5-6
Year:
2013
Address:
Heidelberg, Germany
Editor:
Joy Ying Zhang
Venue:
IWSLT
SIG:
SIGSLT
URL:
https://aclanthology.org/2013.iwslt-papers.13
Cite (ACL):
Evgeniy Shin, Sebastian Stüker, Kevin Kilgour, Christian Fügen, and Alex Waibel. 2013. Maximum entropy language modeling for Russian ASR. In Proceedings of the 10th International Workshop on Spoken Language Translation: Papers, Heidelberg, Germany.
Cite (Informal):
Maximum entropy language modeling for Russian ASR (Shin et al., IWSLT 2013)
PDF:
https://preview.aclanthology.org/landing_page/2013.iwslt-papers.13.pdf