Bartosz Żuk
2026
Rethinking the Evaluation of Alignment Methods: Insights into Diversity, Generalisation, and Safety
Denis Janiak | Julia Moska | Dawid Motyka | Karolina Seweryn | Paweł Walkowiak | Bartosz Żuk | Arkadiusz Janz
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Large language models (LLMs) require careful alignment to balance competing objectives: factuality, safety, conciseness, proactivity, and diversity. Existing studies focus on individual techniques or specific dimensions, lacking a holistic assessment of the inherent trade-offs. We propose a unified evaluation framework that compares LLM alignment methods (PPO, DPO, ORPO, KTO) across these five axes, using both in-distribution and out-of-distribution datasets. Leveraging a specialized LLM-as-Judge prompt, validated through human studies, we reveal that DPO and KTO excel in factual accuracy, PPO and DPO lead in safety, and PPO best balances conciseness with proactivity. Our findings provide insights into trade-offs of common alignment methods, guiding the development of more balanced and reliable LLMs.
2025
Where Frameworks (Dis)agree: A Study of Discourse Segmentation
Maciej Ogrodniczuk | Anna Latusek | Karolina Saputa | Alina Wróblewska | Daniel Ziembicki | Bartosz Żuk | Martyna Lewandowska | Adam Okrasiński | Paulina Rosalska | Anna Śliwicka | Aleksandra Tomaszewska | Sebastian Żurowski
Proceedings of the 6th Workshop on Computational Approaches to Discourse, Context and Document-Level Inferences (CODI 2025)
This study addresses the fundamental task of discourse unit detection – the critical initial step in discourse parsing. We analyze how various discourse frameworks conceptualize and structure discourse units, with a focus on their underlying taxonomies and theoretical assumptions. While approaches to discourse segmentation vary considerably, the extent to which these conceptual divergences influence practical implementations remains insufficiently studied. To address this gap, we investigate similarities and differences in segmentation across several English datasets, segmented and annotated according to distinct discourse frameworks, using simple, rule-based heuristics. We evaluate the effectiveness of the rules with respect to gold-standard segmentation, while also assessing variability and cross-framework generalizability. Additionally, we conduct a manual comparison of a sample of rule-based segmentation outputs against benchmark segmentation, identifying points of convergence and divergence. Our findings indicate that discourse frameworks align strongly at the level of segmentation: particular clauses consistently serve as the primary boundaries of discourse units. Discrepancies arise mainly in the treatment of other structures, such as adpositional phrases, appositions, interjections, and parenthesised text segments, which are inconsistently marked as separate discourse units across formalisms.
PLLuM-Align: Polish Preference Dataset for Large Language Model Alignment
Karolina Seweryn | Anna Kołos | Agnieszka Karlińska | Katarzyna Lorenc | Katarzyna Dziewulska | Maciej Chrabaszcz | Aleksandra Krasnodębska | Paula Betscher | Zofia Cieślińska | Katarzyna Kowol | Julia Moska | Dawid Motyka | Paweł Walkowiak | Bartosz Żuk | Arkadiusz Janz
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Alignment is the critical process of minimizing harmful outputs by teaching large language models (LLMs) to prefer safe, helpful, and appropriate responses. While the majority of alignment research and datasets remain overwhelmingly English-centric, ensuring safety across diverse linguistic and cultural contexts requires localized resources. In this paper, we introduce the first Polish preference dataset, PLLuM-Align, created entirely through human annotation to reflect Polish language and cultural nuances. The dataset includes response rating, ranking, and multi-turn dialog data. Designed to reflect the linguistic subtleties and cultural norms of Polish, this resource lays the groundwork for more aligned Polish LLMs and contributes to the broader goal of multilingual alignment in underrepresented languages.
Co-authors
- Arkadiusz Janz 2
- Julia Moska 2
- Dawid Motyka 2
- Karolina Seweryn 2
- Paweł Walkowiak 2
- Paula Betscher 1
- Maciej Chrabaszcz 1
- Zofia Cieślińska 1
- Katarzyna Dziewulska 1
- Denis Janiak 1
- Agnieszka Karlińska 1
- Katarzyna Kowol 1
- Anna Kołos 1
- Aleksandra Krasnodębska 1
- Anna Latusek 1
- Martyna Lewandowska 1
- Katarzyna Lorenc 1
- Maciej Ogrodniczuk 1
- Adam Okrasiński 1
- Paulina Rosalska 1
- Karolina Saputa 1
- Aleksandra Tomaszewska 1
- Alina Wróblewska 1
- Daniel Ziembicki 1
- Anna Śliwicka 1
- Sebastian Żurowski 1