Alexander Kapitanov
2026
Multimodal Evaluation of Russian-language Architectures
Artem Chervyakov | Ulyana Isaeva | Anton Emelyanov | Artem Safin | Maria Tikhonova | Alexander Kharitonov | Yulia Lyakh | Petr Surovtsev | Denis Shevelev | Vildan Saburov | Vasily Konovalov | Elisei Rykov | Ivan Sviridov | Amina Miftakhova | Ilseyar Alimova | Alexander Panchenko | Alexander Kapitanov | Alena Fenogenova
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Multimodal large language models (MLLMs) are currently at the center of research attention, showing rapid progress in scale and capabilities, yet their intelligence, limitations, and risks remain insufficiently understood. To address these issues, particularly in the context of the Russian language, for which no multimodal benchmarks currently exist, we introduce MERA Multi, an open multimodal evaluation framework for Russian-language architectures. The benchmark is instruction-based, covers text, image, audio, and video modalities, and comprises 18 newly constructed evaluation tasks for both general-purpose models and modality-specific architectures (image-to-text, video-to-text, and audio-to-text). Our contributions include: (i) a universal taxonomy of multimodal abilities; (ii) 18 datasets created entirely from scratch with attention to Russian cultural and linguistic specificity, with unified prompts and metrics; (iii) baseline results for both closed-source and open-source models; and (iv) a methodology for preventing benchmark leakage, including watermarking for private sets. While our current focus is on Russian, the proposed benchmark provides a replicable methodology for constructing multimodal benchmarks in typologically diverse languages, particularly within the Slavic language family.
2025
Logos as a Well-Tempered Pre-train for Sign Language Recognition
Ilya Ovodov | Petr Surovtsev | Karina Kvanchiani | Alexander Kapitanov | Alexander Nagaev
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
This paper examines two aspects of the isolated sign language recognition (ISLR) task. First, although a number of datasets are available, the data for individual sign languages is limited. This poses the challenge of cross-language ISLR model training, including transfer learning. Second, similar signs can have different semantic meanings, which leads to ambiguity in dataset labeling and raises the question of the best policy for annotating such signs. To address these issues, this study presents Logos, a novel Russian Sign Language (RSL) dataset: the most extensive available ISLR dataset by number of signers, one of the most extensive in size and vocabulary, and the largest RSL dataset. We show that a model pre-trained on the Logos dataset can be used as a universal encoder for SLR tasks in other languages, including few-shot learning. We explore cross-language transfer learning approaches and find that joint training with multiple classification heads benefits accuracy on the target low-resource datasets the most. A key feature of the Logos dataset is its explicitly annotated groups of visually similar signs. We show that explicitly labeling visually similar signs improves the quality of the trained model as a visual encoder for downstream tasks. Based on these contributions, we outperform the current state-of-the-art results on the WLASL dataset and achieve competitive results on the AUTSL dataset with a single-stream model processing solely RGB video. The source code, dataset, and pre-trained models are publicly available.
RusCode: Russian Cultural Code Benchmark for Text-to-Image Generation
Viacheslav Vasilev | Julia Agafonova | Nikolai Gerasimenko | Alexander Kapitanov | Polina Mikhailova | Evelina Mironova | Denis Dimitrov
Findings of the Association for Computational Linguistics: NAACL 2025
Text-to-image generation models have gained popularity among users around the world. However, many of these models exhibit a strong bias toward English-speaking cultures, ignoring or misrepresenting the unique characteristics of other language groups, countries, and nationalities. This lack of cultural awareness can reduce generation quality and lead to undesirable consequences such as unintentional insult and the spread of prejudice. In contrast to the field of natural language processing, cultural awareness in computer vision has not been explored as extensively. In this paper, we strive to reduce this gap. We propose RusCode, a benchmark for evaluating the quality of text-to-image generation involving elements of the Russian cultural code. To this end, we form a list of 19 categories that best represent the features of Russian visual culture. Our final dataset consists of 1,250 text prompts in Russian and their translations into English. The prompts cover a wide range of topics, including complex concepts from art, popular culture, folk traditions, famous people's names, natural objects, scientific achievements, etc. We present the results of a human side-by-side evaluation of how popular generative models represent Russian visual concepts.
Co-authors
- Petr Surovtsev 2
- Julia Agafonova 1
- Ilseyar Alimova 1
- Artem Chervyakov 1
- Denis Dimitrov 1
- Anton Emelyanov 1
- Alena Fenogenova 1
- Nikolai Gerasimenko 1
- Ulyana Isaeva 1
- Alexander Kharitonov 1
- Vasily Konovalov 1
- Karina Kvanchiani 1
- Yulia Lyakh 1
- Amina Miftakhova 1
- Polina Mikhailova 1
- Evelina Mironova 1
- Alexander Nagaev 1
- Ilya Ovodov 1
- Alexander Panchenko 1
- Elisei Rykov 1
- Vildan Saburov 1
- Artem Safin 1
- Denis Shevelev 1
- Ivan Sviridov 1
- Maria Tikhonova 1
- Viacheslav Vasilev 1