Justyna Gromada


2025

Neural Models and Language Model Prompting for the Multidimensional Evaluation of Open-Ended Conversations
Michelle Elizabeth | Alicja Kasicka | Natalia Krawczyk | Magalie Ochs | Gwénolé Lecorvé | Justyna Gromada | Lina M. Rojas-Barahona
Proceedings of the Twelfth Dialog System Technology Challenge

The growing number of generative AI-based dialogue systems has made their evaluation a crucial challenge. This paper presents our contribution to this problem through the Dialog System Technology Challenge (DSTC-12, Track 1), where we developed models to predict dialogue-level, dimension-specific scores. Given the constraint of using relatively small models (i.e., fewer than 13 billion parameters), our work follows two main strategies: employing Language Models (LMs) as evaluators through prompting, and training encoder-based classification and regression models. Our results show that while LM prompting achieves only modest correlations with human judgments, it still ranks second on the test set, outperformed only by the baseline. The regression and classification models, with significantly fewer parameters, demonstrate high correlation for some dimensions on the validation set. Although their performance decreases on the test set, it is important to note that the test set contains annotations with significantly different score ranges for some dimensions relative to the training and validation sets.
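To make the LM-as-evaluator strategy concrete, the following is a minimal sketch of predicting a dimension-specific dialogue score via prompting. It is not the paper's implementation: the model checkpoint, dimension names, prompt wording, and 1-to-5 scale are all assumptions chosen for illustration.

```python
# Illustrative sketch of LM-as-evaluator prompting (not the authors' code).
# The checkpoint, dimensions, and prompt template below are assumptions.
from transformers import pipeline

DIMENSIONS = ["coherence", "engagingness", "naturalness"]  # hypothetical names

PROMPT_TEMPLATE = (
    "You are evaluating an open-ended conversation.\n"
    "Conversation:\n{dialogue}\n\n"
    "Rate the conversation's {dimension} on a scale from 1 to 5. "
    "Answer with a single integer.\nScore:"
)

# Any instruction-tuned model under 13B parameters would fit the track's constraint.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def score_dialogue(dialogue: str, dimension: str) -> int:
    prompt = PROMPT_TEMPLATE.format(dialogue=dialogue, dimension=dimension)
    # Greedy decoding with a tiny token budget keeps the output parseable.
    output = generator(prompt, max_new_tokens=4, do_sample=False)[0]["generated_text"]
    completion = output[len(prompt):]
    digits = [c for c in completion if c.isdigit()]
    return int(digits[0]) if digits else 3  # fall back to the midpoint if parsing fails

dialogue = "User: Hi!\nBot: Hello, how can I help you today?"
print({dim: score_dialogue(dialogue, dim) for dim in DIMENSIONS})
```

The paper's actual prompts, score ranges, and evaluation dimensions may differ; this only shows the general prompting pattern.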

Evaluating Conversational Agents with Persona-driven User Simulations based on Large Language Models: A Sales Bot Case Study
Justyna Gromada | Alicja Kasicka | Ewa Komkowska | Lukasz Krajewski | Natalia Krawczyk | Morgan Veyret | Bartosz Przybył | Lina M. Rojas-Barahona | Michał K. Szczerbak
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track

We present a novel approach to conversational agent evaluation using Persona-driven User Simulations based on Large Language Models (LLMs). Our methodology first uses LLMs to generate diverse customer personas, which are then used to configure a single LLM-based user simulator. This simulator evaluates SalesBot 2.0, a proactive conversational sales agent. We introduce a dataset of these personas, along with corresponding goals and conversation scenarios, enabling comprehensive testing across customer types with varying assertiveness levels and precision of needs. Our evaluation framework assesses both the simulator's adherence to persona instructions and the bot's performance across multiple dimensions, combining human annotation with LLM-as-a-judge assessments using commercial and open-source models. Results demonstrate that our LLM-based simulator effectively emulates nuanced customer roles, and that cross-selling strategies can be implemented with minimal impact on customer satisfaction, with the effect varying by customer type.
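As an illustration of how a persona can configure an LLM-based user simulator, here is a hedged sketch. The Persona fields, prompt template, and model checkpoint are hypothetical and not taken from the paper, whose personas, goals, and scenarios are considerably richer.

```python
# Illustrative sketch of a persona-driven user simulator (not the paper's system).
# Persona fields, prompt wording, and the checkpoint are assumptions.
from dataclasses import dataclass
from transformers import pipeline

@dataclass
class Persona:
    name: str
    assertiveness: str  # e.g. "low", "medium", "high"
    needs: str          # how precisely the customer can state their needs
    goal: str           # what the customer wants from the sales bot

SIMULATOR_PROMPT = (
    "You are role-playing a customer talking to a sales assistant.\n"
    "Persona: {name}. Assertiveness: {assertiveness}. "
    "Precision of needs: {needs}.\nYour goal: {goal}\n"
    "Stay in character and reply with a single customer utterance.\n\n"
    "Conversation so far:\n{history}\nCustomer:"
)

generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def simulate_user_turn(persona: Persona, history: str) -> str:
    prompt = SIMULATOR_PROMPT.format(
        name=persona.name, assertiveness=persona.assertiveness,
        needs=persona.needs, goal=persona.goal, history=history)
    out = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
    # Keep only the first line of the continuation as the simulated utterance.
    return out[len(prompt):].strip().split("\n")[0]

persona = Persona("Alex", "low", "vague", "find an internet plan for a small flat")
print(simulate_user_turn(persona, "Sales bot: Hello! What can I do for you?"))
```

In the paper the personas themselves are LLM-generated; here one is hard-coded to keep the example self-contained.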

2021

SpanAlign: Efficient Sequence Tagging Annotation Projection into Translated Data applied to Cross-Lingual Opinion Mining
Léo Jacqmin | Gabriel Marzinotto | Justyna Gromada | Ewelina Szczekocka | Robert Kołodyński | Géraldine Damnati
Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)

Following the increasing performance of neural machine translation systems, the paradigm of using automatically translated data for cross-lingual adaptation is now studied in several applicative domains. However, accurately projecting annotations remains an issue for sequence tagging tasks, where annotations must be projected with correct spans. Additionally, when the task involves noisy user-generated text, the quality of both translation and annotation projection can suffer. In this paper we propose to tackle multilingual sequence tagging with a new span alignment method and apply it to opinion target extraction from customer reviews. We show that, given suitable heuristics, translated data with automatic span-level annotation projection can yield improvements both for cross-lingual adaptation, compared to zero-shot transfer, and for data augmentation, compared to a multilingual baseline.
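For intuition, the sketch below shows the basic idea of projecting a span-level annotation through word alignments. It is not the SpanAlign method or its heuristics: the alignment pairs are written by hand rather than produced by an automatic aligner, and the min/max projection rule is a deliberate simplification.

```python
# Illustrative sketch of span annotation projection via word alignments
# (a simplification, not the SpanAlign method; alignments are given, not computed).

def project_span(span, alignments):
    """Map a source token span (start, end) to a target span via alignment pairs."""
    targets = [t for s, t in alignments if span[0] <= s <= span[1]]
    if not targets:
        return None  # no aligned target tokens: the annotation cannot be projected
    return (min(targets), max(targets))

source = "the battery life is great".split()
target = "die Akkulaufzeit ist großartig".split()
# (source_index, target_index) pairs, e.g. from an automatic word aligner
alignments = [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3)]

opinion_target = (1, 2)  # "battery life" in the source review
projected = project_span(opinion_target, alignments)
print(target[projected[0]:projected[1] + 1])  # -> ['Akkulaufzeit']
```

Real translated user-generated text brings noisy alignments and span boundary errors, which is precisely what the paper's heuristics are designed to handle.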