Franco Sansonetti


2025

PeRAG: Multi-Modal Perspective-Oriented Verbalization with RAG for Inclusive Decision Making
Muhammad Saad Amin | Horacio Jesús Jarquín-Vásquez | Franco Sansonetti | Simona Lo Giudice | Valerio Basile | Viviana Patti
Proceedings of the Eleventh Italian Conference on Computational Linguistics (CLiC-it 2025)

PERSEVAL: A Framework for Perspectivist Classification Evaluation
Soda Marem Lo | Silvia Casola | Erhan Sezerer | Valerio Basile | Franco Sansonetti | Antonio Uva | Davide Bernardi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Data perspectivism goes beyond majority vote label aggregation by recognizing various perspectives as legitimate ground truths. However, current evaluation practices remain fragmented, making it difficult to compare perspectivist approaches and analyze their impact on different users and demographic subgroups. To address this gap, we introduce PersEval, the first unified framework for evaluating perspectivist models in NLP. A key innovation is its evaluation at the individual annotator level and its treatment of annotators and users as distinct entities, consistent with real-world scenarios. We demonstrate PersEval’s capabilities through experiments with both encoder-based and decoder-based approaches, as well as an analysis of the effect of sociodemographic prompting. By considering global, text-, trait- and user-level evaluation metrics, we show that PersEval is a powerful tool for examining how models are influenced by user-specific information and identifying the biases this information may introduce.
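
To make the contrast between annotator-level and majority-vote evaluation concrete, here is a minimal sketch in Python. It is a conceptual illustration only: the data structures and function names are hypothetical and do not reflect the actual PersEval API, and the toy labels are invented for demonstration.

```python
# Conceptual sketch: evaluating against each annotator's labels vs. an
# aggregated majority-vote gold label. Names and data are hypothetical,
# not the PersEval API.
from collections import Counter
from statistics import mean

# Perspectivist dataset: every annotator provides their own label per text.
annotations = {
    "text_1": {"ann_A": 1, "ann_B": 0, "ann_C": 1},
    "text_2": {"ann_A": 0, "ann_B": 0, "ann_C": 1},
}

# Model predictions conditioned on (text, annotator),
# e.g. obtained via sociodemographic prompting.
predictions = {
    ("text_1", "ann_A"): 1, ("text_1", "ann_B"): 1, ("text_1", "ann_C"): 1,
    ("text_2", "ann_A"): 0, ("text_2", "ann_B"): 0, ("text_2", "ann_C"): 0,
}

def majority_vote_accuracy():
    """Classic aggregated evaluation: one gold label per text."""
    correct = []
    for text, labels in annotations.items():
        gold = Counter(labels.values()).most_common(1)[0][0]
        # Collapse per-annotator predictions into a single modal prediction.
        pred = Counter(predictions[(text, a)] for a in labels).most_common(1)[0][0]
        correct.append(pred == gold)
    return mean(correct)

def annotator_level_accuracy():
    """Perspectivist evaluation: score each annotator's labels separately."""
    per_annotator = {}
    for text, labels in annotations.items():
        for ann, gold in labels.items():
            per_annotator.setdefault(ann, []).append(predictions[(text, ann)] == gold)
    return {ann: mean(hits) for ann, hits in per_annotator.items()}

print("Majority-vote accuracy:", majority_vote_accuracy())   # 1.0 on this toy data
print("Per-annotator accuracy:", annotator_level_accuracy()) # reveals ann_B and ann_C errors
```

On this toy data the aggregated score looks perfect while the per-annotator breakdown exposes disagreement-driven errors, which is the kind of gap annotator-level evaluation is meant to surface.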