Framing the Language: Fine-Tuning Gemma 3 for Manipulation Detection
Mykola Khandoga, Yevhen Kostiuk, Anton Polishko, Kostiantyn Kozlov, Yurii Filipchuk, Artur Kiulian
Abstract
In this paper, we present our solutions for the two UNLP 2025 shared tasks: manipulation span detection and manipulation technique classification in Ukraine-related media content sourced from Telegram channels. We experimented with fine-tuning large language models (LLMs) with up to 12 billion parameters, including both encoder- and decoder-based architectures. Our experiments identified Gemma 3 12b with a custom classification head as the best-performing model for both tasks. To address the limited size of the original training dataset, we generated 50k synthetic samples and marked up an additional 400k media entries containing manipulative content.
- Anthology ID: 2025.unlp-1.6
- Volume: Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)
- Month: July
- Year: 2025
- Address: Vienna, Austria (online)
- Editor: Mariana Romanyshyn
- Venues: UNLP | WS
- Publisher: Association for Computational Linguistics
- Pages: 49–54
- URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.unlp-1.6/
- Cite (ACL): Mykola Khandoga, Yevhen Kostiuk, Anton Polishko, Kostiantyn Kozlov, Yurii Filipchuk, and Artur Kiulian. 2025. Framing the Language: Fine-Tuning Gemma 3 for Manipulation Detection. In Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025), pages 49–54, Vienna, Austria (online). Association for Computational Linguistics.
- Cite (Informal): Framing the Language: Fine-Tuning Gemma 3 for Manipulation Detection (Khandoga et al., UNLP 2025)
- PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.unlp-1.6.pdf