2025
Towards Multilingual LLM Evaluation for Baltic and Nordic languages: A study on Lithuanian History
Yevhen Kostiuk | Oxana Vitman | Łukasz Gągała | Artur Kiulian
Proceedings of the 1st Workshop on Nordic-Baltic Responsible Evaluation and Alignment of Language Models (NB-REAL 2025)
Evaluating LLM Judgment on Latvian and Lithuanian Short Answer Matching
Yevhen Kostiuk | Oxana Vitman | Łukasz Gągała | Artur Kiulian
Proceedings of the 1st Workshop on Nordic-Baltic Responsible Evaluation and Alignment of Language Models (NB-REAL 2025)
From English-Centric to Effective Bilingual: LLMs with Custom Tokenizers for Underrepresented Languages
Artur Kiulian | Anton Polishko | Mykola Khandoga | Yevhen Kostiuk | Guillermo Gabrielli | Łukasz Gągała | Fadi Zaraket | Qusai Abu Obaida | Hrishikesh Garud | Wendy Wing Yee Mak | Dmytro Chaplynskyi | Selma Amor | Grigol Peradze
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
In this paper, we propose a model-agnostic, cost-effective approach to developing bilingual base large language models (LLMs) that support English and any target language. The method includes vocabulary expansion, initialization of new embeddings, model training, and evaluation. We performed our experiments with three languages, each using a non-Latin script: Ukrainian, Arabic, and Georgian. Our approach demonstrates improved language performance while reducing computational costs. It mitigates the disproportionate penalization of underrepresented languages, promoting fairness and minimizing adverse phenomena such as code-switching and broken grammar. Additionally, we introduce new metrics to evaluate language quality, revealing that vocabulary size significantly impacts the quality of generated text.
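A minimal sketch of the vocabulary-expansion and embedding-initialization steps the abstract outlines, assuming Hugging Face transformers and the common mean-of-subtokens initialization; the base checkpoint and token list are placeholders, and the paper's exact recipe may differ.

```python
# Sketch (assumptions): expand a base LLM's vocabulary with target-language
# tokens and initialize each new embedding as the mean of the embeddings of
# the subtokens the original tokenizer produced for that string. The
# checkpoint id and token list are placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "meta-llama/Llama-2-7b-hf"   # placeholder English-centric base model
new_tokens = ["привіт", "мова"]     # hypothetical target-language tokens

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Record how the *original* tokenizer splits each new token before expansion,
# since after add_tokens() each string encodes to its own single id.
old_splits = {t: tokenizer.encode(t, add_special_tokens=False) for t in new_tokens}

tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

with torch.no_grad():
    emb_in = model.get_input_embeddings().weight
    emb_out = model.get_output_embeddings().weight
    for t in new_tokens:
        new_id = tokenizer.convert_tokens_to_ids(t)
        emb_in[new_id] = emb_in[old_splits[t]].mean(dim=0)
        emb_out[new_id] = emb_out[old_splits[t]].mean(dim=0)
# Continued pretraining on mixed English + target-language data would follow.
```

Mean-of-subtokens initialization keeps new embeddings inside the distribution the model already understands, which tends to stabilize the subsequent training stage; whether the paper uses this particular scheme is an assumption here.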
Framing the Language: Fine-Tuning Gemma 3 for Manipulation Detection
Mykola Khandoga | Yevhen Kostiuk | Anton Polishko | Kostiantyn Kozlov | Yurii Filipchuk | Artur Kiulian
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
In this paper, we present our solutions for the two UNLP 2025 shared tasks: manipulation span detection and manipulation technique classification in Ukraine-related media content sourced from Telegram channels. We experimented with fine-tuning large language models (LLMs) with up to 12 billion parameters, including both encoder- and decoder-based architectures. Our experiments identified Gemma 3 12b with a custom classification head as the best-performing model for both tasks. To address the limited size of the original training dataset, we generated 50k synthetic samples and annotated an additional 400k media entries containing manipulative content.
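As a rough illustration of the setup described above, here is a minimal sketch of a decoder LLM backbone with a custom classification head, assuming Hugging Face transformers; the checkpoint id, class count, and last-token pooling are assumptions, not the authors' exact configuration.

```python
# Sketch (assumptions): a causal-LM backbone with a custom linear head for
# manipulation-technique classification. Checkpoint id, number of classes,
# and last-token pooling are placeholders, not the authors' configuration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "google/gemma-3-12b-it"  # assumed checkpoint id
NUM_CLASSES = 10                    # hypothetical number of manipulation techniques

class ManipulationClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
        cfg = self.backbone.config
        # Multimodal Gemma 3 configs nest the text settings under text_config.
        hidden = getattr(cfg, "hidden_size", None) or cfg.text_config.hidden_size
        self.head = nn.Linear(hidden, NUM_CLASSES)  # the custom classification head

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        # Decoder models attend left-to-right, so pool the last real token.
        last = attention_mask.sum(dim=1) - 1
        pooled = out.last_hidden_state[torch.arange(input_ids.size(0)), last]
        return self.head(pooled.float())

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = ManipulationClassifier()
batch = tokenizer(["приклад маніпулятивного тексту"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # (batch, NUM_CLASSES)
```

Training such a head would typically minimize cross-entropy over the annotated labels (or binary cross-entropy if techniques can co-occur); the loss choice is likewise an assumption here.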