Yurii Paniv
2026
BabyBabelLM: A Multilingual Benchmark of Developmentally Plausible Training Data
Jaap Jumelet | Abdellah Fourtassi | Akari Haga | Bastian Bunzeck | Bhargav Shandilya | Diana Galvan-Sosa | Faiz Ghifari Haznitrama | Francesca Padovani | Francois Meyer | Hai Hu | Julen Etxaniz | Laurent Prevot | Linyang He | María Grandury | Mila Marcheva | Negar Foroutan | Nikitas Theodoropoulos | Pouya Sadeghi | Siyuan Song | Suchir Salhan | Susana Zhou | Yurii Paniv | Ziyin Zhang | Arianna Bisazza | Alex Warstadt | Leshem Choshen
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
We present BabyBabelLM, a multilingual collection of datasets modeling the language a person observes from birth until they acquire a native language. We curate developmentally plausible pretraining data aiming to cover the equivalent of 100M English words of content in each of 45 languages. We compile evaluation suites and train baseline models in each language. BabyBabelLM aims to facilitate multilingual pretraining and cognitive modeling.
Bridging Applied Experience and Research Contexts in Ukrainian NLP Education
Yurii Paniv | Viktoriia Makovska
Proceedings of the Seventh Workshop on Teaching Natural Language Processing (TeachNLP 2026)
We present an open, bachelor-level Natural Language Processing (NLP) course developed at Ukrainian Catholic University and delivered in Ukrainian. The course addresses several challenges in NLP education: adapting predominantly English-centric materials to a different linguistic and cultural context, supporting students with heterogeneous technical backgrounds, and balancing foundational theory with industry-relevant skills. All course materials, including lecture slides, notebooks, video recordings, and assignments, are publicly available. We describe our pedagogical design choices, focusing on culturally adapted tasks, integrated ethics, project-based assessment, and continuous student feedback. Our experience demonstrates that it is feasible to build a comprehensive and modern NLP curriculum from scratch in a non-English context, even when instructors come primarily from industry backgrounds.
2025
Isolating LLM Performance Gains in Pre-training versus Instruction-tuning for Mid-resource Languages: The Ukrainian Benchmark Study
Yurii Paniv
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
This paper evaluates language model performance on Ukrainian language tasks across multiple downstream benchmarks, including summarization, closed and open question answering, and translation at both sentence and paragraph levels. We also introduce LongFlores, an extension of the FLORES benchmark designed specifically to assess paragraph-level translation capabilities. In our experiments, we compare the performance of base models against their instruction-tuned counterparts to isolate and quantify the source of performance improvements on Ukrainian language tasks. Our findings reveal that for popular open-source models, base models in the few-shot setting outperform their instruction-tuned counterparts in the zero-shot setting. This suggests that less attention is paid to Ukrainian during the instruction-tuning phase, providing valuable insights for future model development and optimization for Ukrainian and potentially other lower-resourced languages.
Benchmarking Multimodal Models for Ukrainian Language Understanding Across Academic and Cultural Domains
Yurii Paniv | Artur Kiulian | Dmytro Chaplynskyi | Mykola Khandoga | Anton Polishko | Tetiana Bas | Guillermo Gabrielli
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
While the evaluation of multimodal English-centric models is an active area of research with numerous benchmarks, there is a profound lack of benchmarks or evaluation suites for low- and mid-resource languages. We introduce ZNO-Vision, a comprehensive multimodal Ukrainian-centric benchmark derived from the standardized university entrance examination (ZNO). The benchmark consists of over 4300 expert-crafted questions spanning 12 academic disciplines, including mathematics, physics, chemistry, and humanities. We evaluated the performance of both open-source models and API providers, finding that only a handful of models performed above baseline. Alongside the new benchmark, we performed the first evaluation study of multimodal text generation for the Ukrainian language: we measured caption generation quality on the Multi30K-UK dataset. Lastly, we tested a few models from a cultural perspective on knowledge of national cuisine. We believe our work will advance multimodal generation capabilities for the Ukrainian language and our approach could be useful for other low-resource languages.
UAlign: LLM Alignment Benchmark for the Ukrainian Language
Andrian Kravchenko | Yurii Paniv | Nazarii Drushchak
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
This paper introduces UAlign, a comprehensive benchmark for evaluating the alignment of Large Language Models (LLMs) in the Ukrainian language. The benchmark consists of two complementary components: a moral judgment dataset with 3,682 scenarios of varying ethical complexity and a dataset with 1,700 ethical situations presenting clear normative distinctions. Each element provides parallel English-Ukrainian text pairs, enabling cross-lingual comparison. Unlike existing resources predominantly developed for high-resource languages, our benchmark addresses the critical need for evaluation resources in Ukrainian. The development process involved machine translation and linguistic validation using Ukrainian language models for grammatical error correction. Our cross-lingual evaluation of six LLMs confirmed a performance gap between alignment in Ukrainian and English while also providing valuable insights into the overall alignment capabilities of these models. The benchmark has been made publicly available to facilitate further research initiatives and enhance commercial applications. Warning: The datasets introduced in this paper contain sensitive materials related to ethical and moral scenarios that may include offensive, harmful, illegal, or controversial content.
Context-Aware Lexical Stress Prediction and Phonemization for Ukrainian TTS Systems
Anastasiia Senyk | Mykhailo Lukianchuk | Valentyna Robeiko | Yurii Paniv
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
Text preprocessing is a fundamental component of high-quality speech synthesis. This work presents a novel rule-based phonemizer combined with a sentence-level lexical stress prediction model to improve phonetic accuracy and prosody prediction in text-to-speech pipelines. We also introduce a new benchmark dataset with annotated stress patterns designed for evaluating lexical stress prediction systems at the sentence level. Experimental results demonstrate that the proposed phonemizer achieves a 1.23% word error rate on a manually constructed pronunciation dataset, while the lexical stress prediction pipeline achieves results close to dictionary-based methods, outperforming existing neural network solutions.
2024
Setting up the Data Printer with Improved English to Ukrainian Machine Translation
Yurii Paniv | Dmytro Chaplynskyi | Nikita Trynus | Volodymyr Kyrylov
Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024
To build large language models for Ukrainian, we need to expand our corpora with large amounts of new algorithmic tasks expressed in natural language. Examples of task performance expressed in English are abundant, so a high-quality translation system would enable our community to curate datasets faster. To aid this goal, we introduce a recipe for building a translation system using supervised finetuning of a large pretrained language model on a noisy parallel dataset of 3M pairs of Ukrainian and English sentences, followed by a second phase of training using 17K examples selected by k-fold perplexity filtering from another dataset of higher quality. Our decoder-only model, named Dragoman, outperforms previous state-of-the-art encoder-decoder models on the FLORES devtest set.
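The k-fold perplexity filtering step mentioned in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `score_fn` is a toy stand-in for a model-derived perplexity score (real use would fit a language model on the held-out folds), and all names and parameters here are hypothetical:

```python
import random

def kfold_perplexity_filter(examples, score_fn, k=5, keep_ratio=0.6):
    """Score each example with a scorer built from the *other* folds,
    then keep the fraction of examples with the lowest scores."""
    random.seed(0)
    shuffled = examples[:]
    random.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    scored = []
    for i, fold in enumerate(folds):
        # pool together every fold except the one being scored
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        for ex in fold:
            scored.append((score_fn(ex, train), ex))
    scored.sort(key=lambda pair: pair[0])  # low score = "plausible", keep it
    keep_n = int(len(scored) * keep_ratio)
    return [ex for _, ex in scored[:keep_n]]

def toy_score(ex, train):
    """Toy proxy for perplexity: fraction of words unseen in the pool."""
    vocab = {w for sent in train for w in sent.split()}
    words = ex.split()
    return sum(1 for w in words if w not in vocab) / max(len(words), 1)

data = ["the cat sat", "the dog ran", "qwxz zzz", "the cat ran", "dog sat here"]
kept = kfold_perplexity_filter(data, toy_score, k=5, keep_ratio=0.6)
# the out-of-vocabulary line "qwxz zzz" scores worst and is filtered out
```

The cross-fold scoring ensures no example is scored by a model that was trained on it, which would otherwise bias its perplexity downward.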
Co-authors
- Dmytro Chaplynskyi 2
- Tetiana Bas 1
- Arianna Bisazza 1
- Bastian Bunzeck 1
- Leshem Choshen 1
- Nazarii Drushchak 1
- Julen Etxaniz 1
- Negar Foroutan 1
- Abdellah Fourtassi 1
- Guillermo Gabrielli 1
- Diana Galván-Sosa 1
- María Grandury 1
- Akari Haga 1
- Faiz Ghifari Haznitrama 1
- Linyang He 1
- Hai Hu 1
- Jaap Jumelet 1
- Mykola Khandoga 1
- Artur Kiulian 1
- Andrian Kravchenko 1
- Volodymyr Kyrylov 1
- Mykhailo Lukianchuk 1
- Viktoriia Makovska 1
- Mila Marcheva 1
- Francois Meyer 1
- Francesca Padovani 1
- Anton Polishko 1
- Laurent Prévot 1
- Valentyna Robeiko 1
- Pouya Sadeghi 1
- Suchir Salhan 1
- Anastasiia Senyk 1
- Bhargav Shandilya 1
- Siyuan Song 1
- Nikitas Theodoropoulos 1
- Nikita Trynus 1
- Alex Warstadt 1
- Ziyin Zhang 1
- Susana Zhou 1