Eldar Damirov


2025

GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture
Valentin Mamedov | Evgenii Kosarev | Gregory Leleytner | Ilya Shchuckin | Valeriy Berezovskiy | Daniil Smirnov | Dmitry Kozlov | Sergei Averkiev | Lukyanenko Ivan | Aleksandr Proshunin | Ainur Israfilova | Ivan Baskov | Artem Chervyakov | Emil Shakirov | Mikhail Kolesov | Daria Khomich | Daria Latortseva | Sergei Porkhun | Yury Fedorov | Oleg Kutuzov | Polina Kudriavtseva | Sofiia Soldatova | Kolodin Egor | Stanislav Pyatkin | Dzmitry Menshykh | Grafov Sergei IUrevich | Eldar Damirov | Vladimir Karlov | Ruslan Gaitukiev | Arkadiy Shatenov | Alena Fenogenova | Nikita Savushkin | Fedor Minkin
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, the pre-training process, and the experiments that guided our design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual counterparts. The paper presents a system demonstration of the top-performing models, accessible via an API, a Telegram bot, and a Web interface. Furthermore, we have released three GigaChat models as open source, aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.
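The abstract names a Mixture of Experts architecture as the core design choice. As a rough illustration only (not GigaChat's actual implementation; all weight shapes and names here are hypothetical), a MoE feed-forward layer routes each token to a small top-k subset of expert networks and mixes their outputs by the router's probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def moe_layer(x, gate_w, expert_ws, top_k=2):
    """Minimal top-k MoE routing sketch (illustrative, not GigaChat's code).

    x:         (tokens, d_model) token representations
    gate_w:    (d_model, n_experts) router weights
    expert_ws: list of (d_model, d_model) per-expert weight matrices
    """
    probs = softmax(x @ gate_w)                    # (tokens, n_experts)
    top = np.argsort(-probs, axis=-1)[:, :top_k]   # top_k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = top[t]
        weights = probs[t, chosen]
        weights = weights / weights.sum()          # renormalize over chosen experts
        for w, e in zip(weights, chosen):
            out[t] += w * (x[t] @ expert_ws[e])    # only top_k experts run per token
    return out

d_model, n_experts, tokens = 8, 4, 3
x = rng.standard_normal((tokens, d_model))
gate_w = rng.standard_normal((d_model, n_experts))
expert_ws = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
y = moe_layer(x, gate_w, expert_ws)
```

The efficiency claim rests on sparsity: each token touches only `top_k` of the `n_experts` feed-forward blocks, so parameter count grows with the number of experts while per-token compute does not.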

2022

Active Learning for Abstractive Text Summarization
Akim Tsvigun | Ivan Lysenko | Danila Sedashov | Ivan Lazichny | Eldar Damirov | Vladimir Karlov | Artemy Belousov | Leonid Sanochkin | Maxim Panov | Alexander Panchenko | Mikhail Burtsev | Artem Shelmanov
Findings of the Association for Computational Linguistics: EMNLP 2022

Construction of human-curated annotated datasets for abstractive text summarization (ATS) is very time-consuming and expensive, because creating each instance requires a human annotator to read a long document and compose a shorter summary that preserves the key information conveyed by the original document. Active Learning (AL) is a technique developed to reduce the amount of annotation required to reach a given level of machine learning model performance. In information extraction and text classification, AL can reduce annotation effort several-fold. Despite its potential for reducing expensive annotation, to the best of our knowledge, no effective AL query strategies for ATS have been proposed. This stems from the fact that many AL strategies rely on uncertainty estimation, whereas, as we show in this work, uncertain instances are usually noisy, and selecting them can degrade model performance compared to passive annotation. We address this problem by proposing the first effective query strategy for AL in ATS, based on diversity principles. We show that, given a fixed annotation budget, using our strategy for AL annotation improves model performance in terms of ROUGE and consistency scores. Additionally, we analyze the effect of self-learning and show that it can further improve model performance.
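The abstract contrasts uncertainty-based selection, which tends to surface noisy documents, with selection based on diversity principles. As a generic sketch of the diversity idea only (farthest-point sampling over document embeddings; this is a stand-in illustration, not the paper's specific query strategy), one can greedily pick the unlabeled instance farthest from everything selected so far:

```python
import numpy as np

def farthest_point_sample(embeddings, budget, seed=0):
    """Greedy diversity-based AL query sketch (illustrative stand-in, not the
    paper's strategy): repeatedly select the point whose distance to its
    nearest already-selected point is largest."""
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    selected = [int(rng.integers(n))]          # random seed instance
    # distance of every point to its nearest selected point so far
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    while len(selected) < budget:
        nxt = int(dists.argmax())              # most "novel" remaining point
        selected.append(nxt)
        dists = np.minimum(dists,
                           np.linalg.norm(embeddings - embeddings[nxt], axis=1))
    return selected

# Toy 2-D "document embeddings": two tight pairs plus one isolated point.
emb = np.array([[0.0, 0.0], [0.1, 0.0],
                [5.0, 5.0], [5.1, 5.0],
                [0.0, 5.0]])
picked = farthest_point_sample(emb, budget=3)
```

With a budget of 3 on this toy data, the selection lands in all three clusters regardless of the random start, illustrating why diversity-based queries avoid spending the budget on near-duplicates the way greedy uncertainty ranking can.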