Yurii Filipchuk


2025

Framing the Language: Fine-Tuning Gemma 3 for Manipulation Detection
Mykola Khandoga | Yevhen Kostiuk | Anton Polishko | Kostiantyn Kozlov | Yurii Filipchuk | Artur Kiulian
Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)

In this paper, we present our solutions for the two UNLP 2025 shared tasks: manipulation span detection and manipulation technique classification in Ukraine-related media content sourced from Telegram channels. We experimented with fine-tuning large language models (LLMs) of up to 12 billion parameters, covering both encoder- and decoder-based architectures. Our experiments identified Gemma 3 12B with a custom classification head as the best-performing model for both tasks. To address the limited size of the original training dataset, we generated 50k synthetic samples and annotated an additional 400k media entries containing manipulative content.
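The abstract mentions pairing a decoder LLM (Gemma 3 12B) with a custom classification head. The paper does not specify the head's design; below is a minimal, hypothetical sketch of one common pattern — masked mean pooling over the backbone's final hidden states followed by a linear layer — with toy dimensions standing in for the 12B model's outputs. All names and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ManipulationClassificationHead(nn.Module):
    """Hypothetical classification head over decoder hidden states
    (illustrative only; not the architecture used in the paper)."""

    def __init__(self, hidden_size: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states: torch.Tensor,
                attention_mask: torch.Tensor) -> torch.Tensor:
        # Masked mean pooling: average only over non-padding positions.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.classifier(self.dropout(pooled))

# Toy dimensions; a real backbone would supply [batch, seq_len, hidden] states.
head = ManipulationClassificationHead(hidden_size=64, num_labels=5)
hidden = torch.randn(2, 10, 64)
mask = torch.ones(2, 10, dtype=torch.long)
logits = head(hidden, mask)
print(logits.shape)  # torch.Size([2, 5])
```

In practice such a head would be attached to the backbone's last hidden states, with per-class logits for the manipulation-technique labels and per-token logits (a different head shape) for span detection.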