Szymon Łukasik
2025
PL-Guard: Benchmarking Language Model Safety for Polish
Aleksandra Krasnodębska | Karolina Seweryn | Szymon Łukasik | Wojciech Kusa
Proceedings of the 10th Workshop on Slavic Natural Language Processing (Slavic NLP 2025)
We present a benchmark dataset for evaluating language model safety in Polish, addressing the underrepresentation of medium-resource languages in existing safety assessments. Our dataset includes both original and adversarially perturbed examples. We fine-tune and evaluate multiple models (LlamaGuard-3-8B, a HerBERT-based classifier, and PLLuM) and find that the HerBERT-based model outperforms the others, especially under adversarial conditions.
2024
LLM generated responses to mitigate the impact of hate speech
Jakub Podolak | Szymon Łukasik | Paweł Balawender | Jan Ossowski | Jan Piotrowski | Katarzyna Bakowicz | Piotr Sankowski
Findings of the Association for Computational Linguistics: EMNLP 2024
In this study, we explore the use of Large Language Models (LLMs) to counteract hate speech. We conducted the first real-life A/B test assessing the effectiveness of LLM-generated counter-speech. During the experiment, we posted 753 automatically generated responses aimed at reducing user engagement under tweets that contained hate speech toward Ukrainian refugees in Poland. Our work shows that interventions with LLM-generated responses significantly decrease user engagement, particularly for original tweets with at least ten views, reducing it by over 20%. This paper outlines the design of our automatic moderation system, proposes a simple metric for measuring user engagement, and details the methodology of conducting such an experiment. We discuss the ethical considerations and challenges in deploying generative AI for discourse moderation.