Albina Khusainova


Automatic Bilingual Phrase Dictionary Construction from GIZA++ Output
Albina Khusainova | Vitaly Romanov | Adil Khan
Proceedings of the 18th Workshop on Multiword Expressions @LREC2022

Modern encoder-decoder based neural machine translation (NMT) models are normally trained on parallel sentences. Hence, they give the best results when translating full sentences rather than sentence fragments. Thus, the task of translating commonly used phrases, which often arises for language learners, is not addressed by NMT models. While human-built phrase dictionaries exist for high-resourced language pairs, less-resourced pairs do not have them. We suggest an approach for building such a dictionary automatically from the GIZA++ output and show that it works significantly better than translating phrases with a sentence-trained NMT system.
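The abstract does not spell out the extraction procedure; a minimal sketch of one plausible scheme, not necessarily the authors' method, is to scan word-aligned parallel sentences (as produced by an aligner such as GIZA++), collect the contiguous target span aligned to each occurrence of a source phrase, and rank candidates by frequency. All names and the input representation below are hypothetical:

```python
from collections import Counter


def phrase_translations(bitext, src_phrase):
    """Rank candidate target-side translations for a source phrase.

    bitext: list of (src_tokens, tgt_tokens, links) triples, where
    links is a set of (src_idx, tgt_idx) word-alignment pairs such as
    a word aligner like GIZA++ would produce.
    """
    counts = Counter()
    n = len(src_phrase)
    for src, tgt, links in bitext:
        for i in range(len(src) - n + 1):
            if src[i:i + n] != src_phrase:
                continue
            # target indices linked to any word of the source phrase
            tgt_idx = [j for (s, j) in links if i <= s < i + n]
            if not tgt_idx:
                continue
            # take the contiguous target span covering all linked words
            cand = tuple(tgt[min(tgt_idx):max(tgt_idx) + 1])
            counts[cand] += 1
    return counts.most_common()


# Toy example: two aligned sentence pairs containing "good morning".
bitext = [
    (["good", "morning"], ["guten", "Morgen"], {(0, 0), (1, 1)}),
    (["good", "morning"], ["guten", "Morgen", "!"], {(0, 0), (1, 1)}),
]
print(phrase_translations(bitext, ["good", "morning"]))
# → [(('guten', 'Morgen'), 2)]
```

A real system would additionally filter candidates by alignment consistency and frequency thresholds, but the counting core is the same idea.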


Hierarchical Transformer for Multilingual Machine Translation
Albina Khusainova | Adil Khan | Adín Ramírez Rivera | Vitaly Romanov
Proceedings of the Eighth Workshop on NLP for Similar Languages, Varieties and Dialects

The choice of parameter sharing strategy in multilingual machine translation models determines how optimally the parameter space is used and hence directly influences ultimate translation quality. Inspired by linguistic trees that show the degree of relatedness between different languages, a new general approach to parameter sharing in multilingual machine translation was suggested recently. The main idea is to use these expert language hierarchies as a basis for the multilingual architecture: the closer two languages are, the more parameters they share. In this work, we test this idea using the Transformer architecture and show that, despite the success reported in previous work, there are problems inherent to training such hierarchical models. We demonstrate that with a carefully chosen training strategy the hierarchical architecture can outperform bilingual models and multilingual models with full parameter sharing.


Evaluation of Morphological Embeddings for English and Russian Languages
Vitaly Romanov | Albina Khusainova
Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP

This paper evaluates morphology-based embeddings for the English and Russian languages. Despite the interest in, and introduction of, several morphology-based word embedding models in the past, and their acclaimed performance improvements on word similarity and language modeling tasks, in our experiments we did not observe any stable advantage over our two baseline models, SkipGram and FastText. The performance exhibited by morphological embeddings is roughly the average of the two baselines mentioned above.