Emilia Wiśnios

Also published as: Emilia Wisnios


2025

Wait, that’s not an option: LLMs Robustness with Incorrect Multiple-Choice Options
Gracjan Góral | Emilia Wiśnios | Piotr Sankowski | Paweł Budzianowski
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

This work introduces a novel framework for evaluating LLMs’ capacity to balance instruction-following with critical reasoning when presented with multiple-choice questions containing no valid answers. Through systematic evaluation across arithmetic, domain-specific knowledge, and high-stakes medical decision tasks, we demonstrate that post-training aligned models often default to selecting invalid options, while base models exhibit improved refusal capabilities that scale with model size. Our analysis reveals that alignment techniques, though intended to enhance helpfulness, can inadvertently impair models’ reflective judgment: the ability to override default behaviors when faced with invalid options. We additionally conduct a parallel human study showing similar instruction-following biases, with implications for how these biases may propagate through human feedback datasets used in alignment. We provide extensive ablation studies examining the impact of model size, training techniques, and prompt engineering. Our findings highlight fundamental tensions between alignment optimization and preservation of critical reasoning capabilities, with important implications for developing more robust AI systems for real-world deployment.
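
As a rough illustration of the evaluation setup described above, the sketch below is not the paper’s released code; the prompt template, the refusal heuristic, and the arithmetic item are assumptions. It shows how one might build a multiple-choice item whose correct answer has been removed and label a model’s reply as either committing to an invalid option or refusing.

```python
# Minimal illustrative sketch, not the paper's framework: probe whether a model
# commits to an invalid option or refuses when no listed answer is correct.
import re

def build_prompt(question: str, distractors: list) -> str:
    """Format a multiple-choice item in which every option is incorrect."""
    letters = "ABCD"
    options = "\n".join(f"{letters[i]}. {opt}" for i, opt in enumerate(distractors))
    return f"{question}\n{options}\nAnswer with the letter of the correct option."

def classify_response(response: str) -> str:
    """Crude heuristic: a reply that leads with an option letter counts as
    selecting an invalid option; anything else is treated as a refusal."""
    if re.match(r"^\s*\(?[A-D]\)?([.)\s]|$)", response.strip()):
        return "selected_invalid"
    return "refused"

# Example item: 2 + 2 with the correct answer (4) deliberately absent.
print(build_prompt("What is 2 + 2?", ["3", "5", "22", "0"]))
print(classify_response("B. 5"))                                    # selected_invalid
print(classify_response("None of the listed options is correct."))  # refused
```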

Behind Closed Words: Creating and Investigating the forePLay Annotated Dataset for Polish Erotic Discourse
Anna Kołos | Katarzyna Lorenc | Emilia Wiśnios | Agnieszka Karlińska
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

The surge in online content has created an urgent demand for robust erotic content detection systems, especially in non-English contexts where current tools demonstrate significant limitations. We introduce forePLay, a novel Polish-language dataset for erotic content detection, comprising over 24,000 annotated sentences. The dataset features a multidimensional taxonomy that captures ambiguity, violence, and socially unacceptable behaviors. Our comprehensive evaluation demonstrates that specialized Polish language models achieve superior performance compared to multilingual alternatives, with transformer-based architectures showing particular strength in handling imbalanced categories. The dataset and accompanying analysis establish essential frameworks for developing linguistically aware content moderation systems, while highlighting critical considerations for extending such capabilities to morphologically complex languages.
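
The point about imbalanced categories can be made concrete with a small sketch. The snippet below is an assumption-laden illustration, not the forePLay training code: the label names and counts are invented, and class-weighted cross-entropy is simply one standard way to counter class imbalance when fine-tuning a transformer classifier.

```python
# Minimal sketch under assumed labels and counts (not the forePLay setup):
# class-weighted cross-entropy for imbalanced categories.
import torch
from collections import Counter

# Hypothetical label distribution mirroring an imbalanced taxonomy.
labels = ["neutral"] * 18000 + ["erotic"] * 5000 + ["erotic_violence"] * 1000
label_names = sorted(set(labels))
counts = Counter(labels)

# Inverse-frequency weights: rarer classes contribute more to the loss.
total = len(labels)
weights = torch.tensor(
    [total / (len(label_names) * counts[name]) for name in label_names],
    dtype=torch.float,
)
print(dict(zip(label_names, weights.tolist())))

# Plugged into the usual PyTorch training loop:
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
logits = torch.randn(4, len(label_names))   # stand-in for model outputs
targets = torch.tensor([0, 0, 1, 2])        # stand-in gold labels
print(loss_fn(logits, targets).item())
```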

2024

BAN-PL: A Polish Dataset of Banned Harmful and Offensive Content from Wykop.pl Web Service
Anna Kolos | Inez Okulska | Kinga Głąbińska | Agnieszka Karlinska | Emilia Wisnios | Paweł Ellerik | Andrzej Prałat
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Since the Internet is flooded with hate, mastering automated online content moderation is one of the main tasks for NLP experts. However, advancements in this field require improved access to publicly available, accurate, and non-synthetic datasets of social media content. For the Polish language, such resources are very limited. In this paper, we address this gap by presenting a new open dataset of offensive social media content for Polish. The dataset comprises content from Wykop.pl, a popular online service often referred to as the Polish Reddit, that was reported by users and banned in the internal moderation process. It contains a total of 691,662 posts and comments, evenly divided into two categories: harmful and neutral (non-harmful). An anonymized subset of the BAN-PL dataset consisting of 24,000 pieces (12,000 per class), along with preprocessing scripts, has been made publicly available. Furthermore, the paper offers valuable insights into real-life content moderation processes and analyzes the linguistic features and content characteristics of the dataset. A comprehensive anonymization procedure has been meticulously described and applied. The prevalent biases encountered in similar datasets, including post-moderation and pre-selection biases, are also discussed.
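
To give a flavor of what an anonymization step can look like, the snippet below is an illustrative sketch only, not the BAN-PL pipeline; the placeholder tags and regular expressions are assumptions. It masks URLs, e-mail addresses, and user handles in a post.

```python
# Illustrative sketch only (not the BAN-PL anonymization procedure): masking
# URLs, e-mail addresses, and user handles with placeholder tags.
import re

PATTERNS = {
    "URL": re.compile(r"https?://\S+"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "USER": re.compile(r"@\w+"),
}

def anonymize(text: str) -> str:
    """Replace potentially identifying spans with placeholder tags."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

print(anonymize("See https://example.com - posted by @jan_kowalski (jan@example.com)"))
# -> "See [URL] - posted by [USER] ([EMAIL])"
```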

2023

Deep Dive into the Language of International Relations: NLP-based Analysis of UNESCO’s Summary Records
Joanna Wojciechowska | Mateusz Odrowaz-Sypniewski | Maria W. Smigielska | Igor Kaminski | Emilia Wiśnios | Bartosz Pieliński | Hanna Schreiber
Proceedings of the 3rd Workshop on Computational Linguistics for the Political and Social Sciences

Towards Harmful Erotic Content Detection through Coreference-Driven Contextual Analysis
Inez Okulska | Emilia Wisnios
Proceedings of the Sixth Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2023)