Nikita Savushkin


2025

GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture
Valentin Mamedov | Evgenii Kosarev | Gregory Leleytner | Ilya Shchuckin | Valeriy Berezovskiy | Daniil Smirnov | Dmitry Kozlov | Sergei Averkiev | Lukyanenko Ivan | Aleksandr Proshunin | Ainur Israfilova | Ivan Baskov | Artem Chervyakov | Emil Shakirov | Mikhail Kolesov | Daria Khomich | Daria Latortseva | Sergei Porkhun | Yury Fedorov | Oleg Kutuzov | Polina Kudriavtseva | Sofiia Soldatova | Kolodin Egor | Stanislav Pyatkin | Dzmitry Menshykh | Grafov Sergei IUrevich | Eldar Damirov | Vladimir Karlov | Ruslan Gaitukiev | Arkadiy Shatenov | Alena Fenogenova | Nikita Savushkin | Fedor Minkin
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and the experiments that guided our design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models, accessible via an API, a Telegram bot, and a Web interface. Furthermore, we have released three GigaChat models as open source, aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.

2024

MERA: A Comprehensive LLM Evaluation in Russian
Alena Fenogenova | Artem Chervyakov | Nikita Martynov | Anastasia Kozlova | Maria Tikhonova | Albina Akhmetgareeva | Anton Emelyanov | Denis Shevelev | Pavel Lebedev | Leonid Sinev | Ulyana Isaeva | Katerina Kolomeytseva | Daniil Moskovskiy | Elizaveta Goncharova | Nikita Savushkin | Polina Mikhailova | Anastasia Minaeva | Denis Dimitrov | Alexander Panchenko | Sergey Markov
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Over the past few years, one of the most notable advancements in AI research has been in foundation models (FMs), headlined by the rise of language models (LMs). However, despite researchers’ attention and the rapid growth in LM applications, their capabilities, limitations, and associated risks still need to be better understood. To address these issues, we introduce MERA, a new instruction benchmark oriented towards evaluating FMs’ performance in Russian. The benchmark encompasses 21 evaluation tasks for generative models covering 10 skills and uses private answer scoring to prevent data leakage. The paper introduces a methodology for evaluating FMs and LMs in fixed zero- and few-shot instruction settings that can be extended to other modalities. We provide an open-source code base for the MERA assessment and a leaderboard with a submission system. We evaluate open LMs as baselines and find that they still fall far short of human-level performance. We publicly release MERA to guide forthcoming research, anticipate groundbreaking model features, standardize the evaluation procedure, and address potential ethical concerns and drawbacks.