Yevhen Kostiuk


2025

Automating Alternative Generation in Decision-Making
Yevhen Kostiuk | Clara Seyfried | Chris Reed
Findings of the Association for Computational Linguistics: EMNLP 2025

In decision making, generating alternative solutions is crucial for solving a problem. However, cognitive biases can impede this process by constraining individual decision makers’ creativity. To address this issue, we introduce a new task for automatically generating alternatives, inspired by the process of human “brainstorming”. We define alternative options in terms of atomic action components and present a dataset of 106 annotated Reddit r/Advice posts containing unique alternative options extracted from users’ replies. We also introduce new metrics to assess the quality of generated components: distinctiveness, creativity, upvote-weighted, crowd intersection, and final commit intersection scores. As a baseline, we evaluated the large language models (LLMs) LLaMa3:8b, LLaMa3.1:8b, and Gemma 2:9b on the alternative component generation task. On the one hand, the models demonstrated high creativity (the ability to generate options beyond those suggested by Reddit users) and performed well at proposing distinct alternatives. A subset of generated components was manually evaluated and found to be useful overall. This indicates that LLMs could be used to extend lists of alternative options, helping decision makers consider a problem from different perspectives. On the other hand, LLMs’ outputs often failed to align with human suggestions, implying that they still tend to miss important components.
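As a rough illustration of what overlap-style metrics such as these can look like in code, here is a minimal Python sketch; the function names, the set-based formulation, and the exact formulas are assumptions made for illustration, not the paper's definitions.

```python
# Hypothetical sketch of overlap-style metrics over generated vs.
# crowd-suggested components; the paper's exact definitions may differ.

def crowd_intersection(generated: set[str], crowd: set[str]) -> float:
    """Share of crowd-suggested components that the model also generated."""
    return len(generated & crowd) / len(crowd) if crowd else 0.0

def creativity(generated: set[str], crowd: set[str]) -> float:
    """Share of generated components that no Reddit user suggested."""
    return len(generated - crowd) / len(generated) if generated else 0.0

def upvote_weighted(generated: set[str], upvotes: dict[str, int]) -> float:
    """Crowd-intersection recall, weighting each component by its upvotes."""
    total = sum(upvotes.values())
    hit = sum(n for comp, n in upvotes.items() if comp in generated)
    return hit / total if total else 0.0
```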

Towards Multilingual LLM Evaluation for Baltic and Nordic languages: A study on Lithuanian History
Yevhen Kostiuk | Oxana Vitman | Łukasz Gągała | Artur Kiulian
Proceedings of the 1st Workshop on Nordic-Baltic Responsible Evaluation and Alignment of Language Models (NB-REAL 2025)

Evaluating LLM Judgment on Latvian and Lithuanian Short Answer Matching
Yevhen Kostiuk | Oxana Vitman | Łukasz Gągała | Artur Kiulian
Proceedings of the 1st Workshop on Nordic-Baltic Responsible Evaluation and Alignment of Language Models (NB-REAL 2025)

From English-Centric to Effective Bilingual: LLMs with Custom Tokenizers for Underrepresented Languages
Artur Kiulian | Anton Polishko | Mykola Khandoga | Yevhen Kostiuk | Guillermo Gabrielli | Łukasz Gagała | Fadi Zaraket | Qusai Abu Obaida | Hrishikesh Garud | Wendy Wing Yee Mak | Dmytro Chaplynskyi | Selma Amor | Grigol Peradze
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)

In this paper, we propose a model-agnostic, cost-effective approach to developing bilingual base large language models (LLMs) that support English and any target language. The method includes vocabulary expansion, initialization of new embeddings, model training, and evaluation. We performed our experiments with three languages, each written in a non-Latin script: Ukrainian, Arabic, and Georgian. Our approach demonstrates improved language performance while reducing computational costs. It mitigates the disproportionate penalization of underrepresented languages, promoting fairness and minimizing adverse phenomena such as code-switching and broken grammar. Additionally, we introduce new metrics to evaluate language quality, revealing that vocabulary size significantly impacts the quality of generated text.
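For readers curious what the vocabulary-expansion step can look like in practice, below is a minimal sketch using the Hugging Face transformers API; the base model id, the example tokens, and the mean-initialization of new embeddings are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: expand an English-centric tokenizer with target-language tokens
# and mean-initialize the new embedding rows. Ids/tokens are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # placeholder English-centric base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Tokens would come from a tokenizer trained on the target language.
new_tokens = ["привіт", "дякую"]
num_added = tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# A common heuristic: initialize each new row as the mean of the pretrained
# embeddings, so continued pretraining starts from a sensible point.
with torch.no_grad():
    emb = model.get_input_embeddings().weight
    emb[-num_added:] = emb[:-num_added].mean(dim=0, keepdim=True)
```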

Framing the Language: Fine-Tuning Gemma 3 for Manipulation Detection
Mykola Khandoga | Yevhen Kostiuk | Anton Polishko | Kostiantyn Kozlov | Yurii Filipchuk | Artur Kiulian
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)

In this paper, we present our solutions for the two UNLP 2025 shared tasks: manipulation span detection and manipulation technique classification in Ukraine-related media content sourced from Telegram channels. We experimented with fine-tuning large language models (LLMs) with up to 12 billion parameters, including both encoder- and decoder-based architectures. Our experiments identified Gemma 3 12b with a custom classification head as the best-performing model for both tasks. To address the limited size of the original training dataset, we generated 50k synthetic samples and annotated an additional 400k media entries containing manipulative content.
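A minimal sketch of a decoder LLM with a custom classification head, in the spirit of the setup described above, is shown below; the backbone id (the paper used Gemma 3 12b), the label count, and the last-token pooling are assumptions, not the authors' exact architecture.

```python
# Sketch: decoder backbone + linear classification head pooled over the
# last non-padding token. Backbone id and num_labels are placeholders.
import torch
import torch.nn as nn
from transformers import AutoModel

class ManipulationClassifier(nn.Module):
    def __init__(self, backbone_id: str = "google/gemma-2-9b", num_labels: int = 10):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(backbone_id)
        self.head = nn.Linear(self.backbone.config.hidden_size, num_labels)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        hidden = self.backbone(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        # For decoder-only models, the last real token summarizes the sequence.
        last = attention_mask.sum(dim=1) - 1
        pooled = hidden[torch.arange(hidden.size(0)), last]
        return self.head(pooled)
```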