Dmitry Kozlov


2025

GigaChat Family: Efficient Russian Language Modeling Through Mixture of Experts Architecture
Valentin Mamedov | Evgenii Kosarev | Gregory Leleytner | Ilya Shchuckin | Valeriy Berezovskiy | Daniil Smirnov | Dmitry Kozlov | Sergei Averkiev | Lukyanenko Ivan | Aleksandr Proshunin | Ainur Israfilova | Ivan Baskov | Artem Chervyakov | Emil Shakirov | Mikhail Kolesov | Daria Khomich | Daria Latortseva | Sergei Porkhun | Yury Fedorov | Oleg Kutuzov | Polina Kudriavtseva | Sofiia Soldatova | Kolodin Egor | Stanislav Pyatkin | Dzmitry Menshykh | Grafov Sergei IUrevich | Eldar Damirov | Vladimir Karlov | Ruslan Gaitukiev | Arkadiy Shatenov | Alena Fenogenova | Nikita Savushkin | Fedor Minkin
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

Generative large language models (LLMs) have become crucial for modern NLP research and applications across various languages. However, the development of foundational models specifically tailored to the Russian language has been limited, primarily due to the significant computational resources required. This paper introduces the GigaChat family of Russian LLMs, available in various sizes, including base models and instruction-tuned versions. We provide a detailed report on the model architecture, pre-training process, and the experiments that guided our design choices. In addition, we evaluate their performance on Russian and English benchmarks and compare GigaChat with multilingual analogs. The paper presents a system demonstration of the top-performing models, accessible via an API, a Telegram bot, and a web interface. Furthermore, we have released three GigaChat models as open source, aiming to expand NLP research opportunities and support the development of industrial solutions for the Russian language.
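
The mixture-of-experts design named in the title can be pictured with a generic sparse routing layer. The sketch below is a minimal, hypothetical top-k MoE block in PyTorch, not the GigaChat implementation described in the paper; the class names, sizes (d_model, d_ff, n_experts, top_k), and the per-slot routing loop are illustrative assumptions only.

```python
# Minimal sketch of top-k mixture-of-experts routing (generic illustration,
# NOT the GigaChat implementation; all names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpertMLP(nn.Module):
    """One feed-forward expert."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        return self.net(x)

class MoELayer(nn.Module):
    """A router picks the top-k experts per token and mixes their outputs."""
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList([ExpertMLP(d_model, d_ff) for _ in range(n_experts)])
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.router(x)                 # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

tokens = torch.randn(16, 512)
print(MoELayer()(tokens).shape)                 # torch.Size([16, 512])
```

Only top_k of the n_experts feed-forward blocks run per token, which is what lets such models grow total parameter count without a proportional increase in per-token compute.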

Hypercomplex Transformer: Novel Attention Mechanism
Maxim Gordeev | Zuev Aleksandr | Mikhail Bakulin | Andrey Latyshev | Dmitry Kozlov | Yiwu Yao | Voronova Anastasia
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics

Self-attention mechanisms have become foundational across modern deep learning architectures. Recent efforts focus on improving their efficiency, particularly for signal processing tasks. Existing approaches employ complex-valued representations for inputs and weights and achieve higher accuracy at the cost of increased model size and inference latency. Dual-number algebra offers a promising alternative that allows more efficient multiplication and faster inference with the same representational capacity. Inspired by previous studies of hypercomplex neural networks, we introduce a generalized hypercomplex attention block and integrate it into Transformer-based models for EEG classification. Our experiments include adapting the hypercomplex models so that their number of parameters matches that of their real-valued counterparts. Across all scenarios, the dual- and complex-valued models consistently outperform the real-valued ones, demonstrating superior accuracy. This work presents hypercomplex attention as a competitive and computationally efficient strategy with potential value for multiple NLP tasks.
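
To make the efficiency argument for dual numbers concrete: with ε² = 0, the product (a + bε)(c + dε) = ac + (ad + bc)ε needs three real multiplications, versus four for a naive complex product. The snippet below is a toy NumPy sketch of dual-valued dot-product attention scores, not the paper's hypercomplex attention block; the function names and tensor shapes are illustrative assumptions.

```python
# Toy illustration of dual-number arithmetic (eps**2 = 0) applied to
# attention-style dot products -- a generic sketch, not the paper's block.
import numpy as np

def dual_mul(a_re, a_du, b_re, b_du):
    """(a_re + a_du*eps)(b_re + b_du*eps) = a_re*b_re + (a_re*b_du + a_du*b_re)*eps.
    The eps*eps term vanishes, so only three real multiplies are needed."""
    return a_re * b_re, a_re * b_du + a_du * b_re

def complex_mul(a_re, a_im, b_re, b_im):
    """Standard complex product, shown for the multiplication-count comparison (four multiplies)."""
    return a_re * b_re - a_im * b_im, a_re * b_im + a_im * b_re

def dual_attention_scores(q_re, q_du, k_re, k_du):
    """Dot-product scores for dual-valued queries/keys.
    The real part is the usual score matrix; the dual part needs only
    two extra matrix products because the eps**2 cross term drops out."""
    scores_re = q_re @ k_re.T
    scores_du = q_re @ k_du.T + q_du @ k_re.T
    return scores_re, scores_du

rng = np.random.default_rng(0)
q_re, q_du = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))   # 4 queries, dim 8
k_re, k_du = rng.normal(size=(6, 8)), rng.normal(size=(6, 8))   # 6 keys, dim 8
re, du = dual_attention_scores(q_re, q_du, k_re, k_du)
print(re.shape, du.shape)   # (4, 6) (4, 6)
```

The saved multiplication per element product is what underlies the faster-inference claim relative to complex-valued layers with the same representational capacity.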