Danil Astafurov
2026
Russian Generative Spelling, Punctuation and Capitalization Correction
Nikita Martynov | Danil Astafurov | Ulyana Isaeva | Ivan Vasil'yevich Maksimov | Joqsan Azocar | Dmitrii Kosenko | Alena Fenogenova
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper presents SAGE, an open-access framework comprising a set of models specifically designed for the generative correction of spelling, punctuation, and capitalization errors in Russian. The release includes four models, featuring a Russian-English version and a distilled version for ease of use and cost-effectiveness. The models are pre-trained with a sequence-to-sequence approach on artificial errors that mimic human mistakes and fine-tuned on annotated multi-domain texts. A set of carefully engineered auxiliary learning objectives is employed during pre-training to enrich the models with additional semantic and syntactic information. Evaluations indicate that SAGE models, despite their small number of parameters, outperform top-tier multilingual and Russian-specific large language models, both closed- and open-source, and set the state of the art. We release an online demo, powered by a single Nvidia A100 80GB GPU and exposed as a web service, that allows users to simultaneously test the most advanced 1.7B-parameter SAGE model, its distilled version, and the Russian-English SAGE model.
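As a rough illustration of the generative correction setup described in the abstract (not the authors' released code), the sketch below queries a sequence-to-sequence spelling-correction checkpoint through the Hugging Face transformers API; the checkpoint id, input sentence, and generation settings are assumptions rather than confirmed details of the SAGE release.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder checkpoint id: substitute the actual SAGE model published with the paper.
checkpoint = "ai-forever/sage-model-placeholder"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Noisy Russian input with spelling, punctuation, and capitalization errors.
text = "я пшел домой а петя остался"

inputs = tokenizer(text, return_tensors="pt")
# The model generates a corrected version of the whole sentence.
corrected_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(corrected_ids[0], skip_special_tokens=True))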
2025
Combining Automated and Manual Data for Effective Downstream Fine-Tuning of Transformers for Low-Resource Language Applications
Ulyana Isaeva | Danil Astafurov | Nikita Martynov
Proceedings of the 1st Joint Workshop on Large Language Models and Structure Modeling (XLLM 2025)
This paper addresses the constraints of downstream applications of pre-trained language models (PLMs) for low-resource languages: the deficiency of pre-training data, which prevents a low-resource language from being well represented in a PLM, and the inaccessibility of high-quality task-specific annotation, which limits task learning. We propose to combine automatically labeled texts with manually annotated data in a two-stage task fine-tuning approach. The experiments show that this methodology, combined with vocabulary adaptation, may compensate for the absence of a targeted PLM or the scarcity of manually annotated data. The methodology is validated on the morphological tagging task for the Udmurt language. We publish our best model, which achieves 93.25% token accuracy, on the HuggingFace Hub along with the training code.
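A minimal sketch of the two-stage fine-tuning idea described above, assuming a generic token-classification setup with the Hugging Face Trainer; the base checkpoint, tag-inventory size, and the pre-tokenized dataset objects (auto_labeled_dataset, gold_dataset) are placeholders, not the paper's actual configuration.

from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder base model and label-set size; the paper's setup differs.
base_checkpoint = "bert-base-multilingual-cased"
num_tags = 60

tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForTokenClassification.from_pretrained(base_checkpoint, num_labels=num_tags)

def fine_tune(model, train_dataset, output_dir, epochs):
    # One fine-tuning stage over an already tokenized and label-aligned dataset.
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=epochs,
                             per_device_train_batch_size=32, learning_rate=3e-5)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
    return model

# Stage 1: large, automatically labeled corpus (e.g. tagged by an existing analyzer).
model = fine_tune(model, auto_labeled_dataset, "stage1-auto", epochs=2)
# Stage 2: small, manually annotated gold corpus refines the same weights.
model = fine_tune(model, gold_dataset, "stage2-gold", epochs=10)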
2024
A Family of Pretrained Transformer Language Models for Russian
Dmitry Zmitrovich | Aleksandr Abramov | Andrey Kalmykov | Vitaly Kadulin | Maria Tikhonova | Ekaterina Taktasheva | Danil Astafurov | Mark Baushenko | Artem Snegirev | Tatiana Shavrina | Sergei S. Markov | Vladislav Mikhailov | Alena Fenogenova
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Transformer language models (LMs) are fundamental to NLP research methodologies and applications in various languages. However, developing such models specifically for the Russian language has received little attention. This paper introduces a collection of 13 Russian Transformer LMs spanning encoder (ruBERT, ruRoBERTa, ruELECTRA), decoder (ruGPT-3), and encoder-decoder (ruT5, FRED-T5) architectures. We report on the model architecture design and pretraining, and on the results of evaluating their generalization abilities on Russian language understanding and generation datasets and benchmarks. By pretraining and releasing these specialized Transformer LMs, we aim to broaden the scope of NLP research directions and enable the development of industrial solutions for the Russian language.
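As a hedged usage note (not part of the paper), the snippet below shows how one of the released encoder checkpoints could be loaded through the Hugging Face transformers API; the repository id is an assumption and should be checked against the models actually published by the authors.

from transformers import AutoModel, AutoTokenizer

# Assumed repository id for the ruBERT encoder; verify against the official release.
checkpoint = "ai-forever/ruBert-base"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

# Encode a Russian sentence and inspect the contextual embeddings it produces.
inputs = tokenizer("Пример предложения на русском языке.", return_tensors="pt")
hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)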