2025
La Leaderboard: A Large Language Model Leaderboard for Spanish Varieties and Languages of Spain and Latin America
María Grandury | Javier Aula-Blasco | Júlia Falcão | Clémentine Fourrier | Miguel González Saiz | Gonzalo Martínez | Gonzalo Santamaria Gomez | Rodrigo Agerri | Nuria Aldama García | Luis Chiruzzo | Javier Conde | Helena Gomez Adorno | Marta Guerrero Nieto | Guido Ivetta | Natàlia López Fuertes | Flor Miriam Plaza-del-Arco | María-Teresa Martín-Valdivia | Helena Montoro Zamorano | Carmen Muñoz Sanz | Pedro Reviriego | Leire Rosado Plaza | Alejandro Vaca Serrano | Estrella Vallecillo-Rodríguez | Jorge Vallego | Irune Zubiaga
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Leaderboards showcase the current capabilities and limitations of Large Language Models (LLMs). To motivate the development of LLMs that represent the linguistic and cultural diversity of the Spanish-speaking community, we present La Leaderboard, the first open-source leaderboard to evaluate generative LLMs in languages and language varieties of Spain and Latin America. La Leaderboard is a community-driven project that aims to establish an evaluation standard for everyone interested in developing LLMs for the Spanish-speaking community. This initial version combines 66 datasets in Catalan, Basque, Galician, and different Spanish varieties, showcasing the evaluation results of 50 models. To encourage community-driven development of leaderboards in other languages, we explain our methodology, including guidance on selecting the most suitable evaluation setup for each downstream task. In particular, we provide a rationale for using fewer few-shot examples than typically found in the literature, aiming to reduce environmental impact and facilitate access to reproducible results for a broader research community.
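As a rough illustration of the kind of few-shot evaluation setup the paper discusses, the sketch below scores a multiple-choice item by comparing length-normalized log-likelihoods of candidate answers under a causal language model, using a 2-shot prompt. The model name, prompt format, and shot count are illustrative assumptions, not La Leaderboard's actual configuration.

# Hedged sketch: few-shot multiple-choice scoring via candidate log-likelihoods.
# Model name, prompt format, and the 2-shot setting are illustrative assumptions,
# not La Leaderboard's actual configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities of the continuation tokens given the prompt."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probability of each token given its prefix.
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    cont_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(logprobs[i, full_ids[0, i + 1]].item() for i in cont_positions)

# A 2-shot prompt, i.e. fewer examples than the 5- or 25-shot setups common in the literature.
few_shot = (
    "Pregunta: ¿Cuál es la capital de España?\nRespuesta: Madrid\n\n"
    "Pregunta: ¿Cuál es la capital de Uruguay?\nRespuesta: Montevideo\n\n"
    "Pregunta: ¿Cuál es la capital de Argentina?\nRespuesta:"
)
options = [" Buenos Aires", " Lima", " Bogotá"]
scores = {o: continuation_logprob(few_shot, o) / len(tok(o).input_ids) for o in options}
print(max(scores, key=scores.get))  # length-normalized choice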
Navigating Ethical Challenges in NLP: Hands-on strategies for students and researchers
Luciana Benotti | Fanny Ducel | Karën Fort | Guido Ivetta | Zhijing Jin | Min-Yen Kan | Seunghun J. Lee | Minzhi Li | Margot Mieskes | Adriana Pagano
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 5: Tutorial Abstracts)
With NLP research being rapidly productionized into real-world applications, it is important to be aware of and think through the consequences of our work. Such ethical considerations are important in both authoring and reviewing (e.g. privacy, consent, fairness, among others). This tutorial will equip participants with basic guidelines for thinking deeply about ethical issues and review common considerations that recur in NLP research. The methodology is interactive and participatory, including discussion of case studies and group work. Participants will gain practical experience in deciding when to flag a paper for ethics review and how to write an ethical considerations section to be shared with the broader community. Most importantly, the participants will be co-creating the tutorial outcomes and extending tutorial materials to share as public outcomes.
HESEIA: A community-based dataset for evaluating social biases in large language models, co-designed in real school settings in Latin America
Guido Ivetta | Marcos J Gomez | Sofía Martinelli | Pietro Palombini | M Emilia Echeveste | Nair Carolina Mazzeo | Beatriz Busaniche | Luciana Benotti
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Most resources for evaluating social biases in Large Language Models are developed without co-design from the communities affected by these biases, and rarely involve participatory approaches. We introduce HESEIA, a dataset of 46,499 sentences created in a professional development course. The course involved 370 high-school teachers and 5,370 students from 189 Latin-American schools. Unlike existing benchmarks, HESEIA captures intersectional biases across multiple demographic axes and school subjects. It reflects local contexts through the lived experience and pedagogical expertise of educators. Teachers used minimal pairs to create sentences that express stereotypes relevant to their school subjects and communities. We show the dataset's diversity both in terms of the demographic axes represented and the knowledge areas included. We demonstrate that the dataset contains more stereotypes unrecognized by current LLMs than previous datasets. HESEIA is available to support bias assessments grounded in educational communities.
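To make the minimal-pair setup concrete, here is a hypothetical record layout for an HESEIA-style entry; the field names and example sentences are assumptions for illustration, not the dataset's actual schema.

# Hypothetical record layout for a minimal-pair entry of the kind the abstract
# describes; field names and values are illustrative assumptions, not HESEIA's schema.
from dataclasses import dataclass

@dataclass
class MinimalPair:
    stereotype: str               # sentence expressing the stereotype
    anti_stereotype: str          # minimally edited counterpart
    demographic_axes: list[str]   # e.g. ["gender", "socioeconomic status"]
    school_subject: str           # e.g. "physics"
    author_role: str              # "teacher" or "student"

pair = MinimalPair(
    stereotype="Las chicas no son buenas para la física.",       # "Girls are not good at physics."
    anti_stereotype="Los chicos no son buenos para la física.",  # "Boys are not good at physics."
    demographic_axes=["gender"],
    school_subject="physics",
    author_role="teacher",
)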
Insights from a Disaggregated Analysis of Kinds of Biases in a Multicultural Dataset
Guido Ivetta | Hernán Maina | Luciana Benotti
Proceedings of the 9th Widening NLP Workshop
Warning: This paper contains explicit statements of offensive stereotypes which may be upsetting. Stereotypes vary across cultural contexts, making it essential to understand how language models encode social biases. MultiLingualCrowsPairs is a dataset of culturally adapted stereotypical and anti-stereotypical sentence pairs across nine languages. While prior work has primarily reported average fairness metrics on masked language models, this paper analyzes social biases in generative models by disaggregating results across specific bias types. We find that although most languages show an overall preference for stereotypical sentences, this masks substantial variation across different types of bias, such as gender, religion, and socioeconomic status. Our findings underscore that relying solely on aggregated metrics can obscure important patterns, and that fine-grained, bias-specific analysis is critical for meaningful fairness evaluation.
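As a sketch of the disaggregation step described above (an illustration of the analysis, not the paper's exact metric), the snippet below groups per-pair outcomes by bias type and reports the fraction of pairs where the model assigns a higher score to the stereotypical sentence, assuming per-sentence model scores are already available.

# Sketch of disaggregating stereotype preference by bias type, assuming each pair
# already carries model scores (e.g. log-likelihoods) for both sentences.
# This illustrates the analysis, not the paper's exact metric; values are made up.
from collections import defaultdict

pairs = [
    # (bias_type, score_of_stereotypical, score_of_anti_stereotypical)
    ("gender", -42.1, -44.8),
    ("religion", -51.3, -50.2),
    ("socioeconomic", -38.7, -39.9),
    ("gender", -40.0, -41.5),
]

by_type = defaultdict(list)
for bias_type, s_stereo, s_anti in pairs:
    by_type[bias_type].append(s_stereo > s_anti)  # True = model prefers the stereotype

overall = sum(sum(v) for v in by_type.values()) / sum(len(v) for v in by_type.values())
print(f"aggregate preference: {overall:.2f}")  # a single number can hide per-type variation
for bias_type, prefs in sorted(by_type.items()):
    print(f"{bias_type:>14}: {sum(prefs) / len(prefs):.2f}  (n={len(prefs)})")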
2024
Selectively Answering Visual Questions
Julian Eisenschlos | Hernán Maina | Guido Ivetta | Luciana Benotti
Findings of the Association for Computational Linguistics: ACL 2024
Recently, large multi-modal models (LMMs) have emerged with the capacity to perform vision tasks such as captioning and visual question answering (VQA) with unprecedented accuracy. Applications such as helping the blind or visually impaired have a critical need for precise answers. It is especially important for models to be well calibrated and able to quantify their uncertainty in order to selectively decide when to answer and when to abstain or ask for clarifications. We perform the first in-depth analysis of calibration methods and metrics for VQA with in-context learning LMMs. Studying VQA on two answerability benchmarks, we show that the likelihood score of visually grounded models is better calibrated than that of their text-only counterparts for in-context learning, where sampling-based methods are generally superior, but no clear winner arises. We propose Avg BLEU, a calibration score combining the benefits of both sampling and likelihood methods across modalities.
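To illustrate the selective-answering decision rule (the thresholding step only, not the paper's Avg BLEU score), the sketch below abstains whenever a model's confidence falls below a threshold and reports coverage and risk on placeholder data.

# Sketch of selective answering: answer only when confidence clears a threshold,
# then report coverage (fraction answered) and risk (error rate on answered items).
# Confidence values are placeholders; the paper studies likelihood- and
# sampling-based scores, plus a combined Avg BLEU score not reproduced here.

def selective_report(confidences, correct, threshold):
    answered = [c >= threshold for c in confidences]
    n_answered = sum(answered)
    coverage = n_answered / len(confidences)
    risk = (
        sum(a and not ok for a, ok in zip(answered, correct)) / n_answered
        if n_answered else 0.0
    )
    return coverage, risk

confidences = [0.91, 0.42, 0.77, 0.30, 0.88]    # placeholder calibration scores
correct     = [True, False, True, False, True]  # whether each answer was right

for t in (0.0, 0.5, 0.8):
    cov, risk = selective_report(confidences, correct, t)
    print(f"threshold={t:.1f}  coverage={cov:.2f}  risk={risk:.2f}")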
Your Stereotypical Mileage May Vary: Practical Challenges of Evaluating Biases in Multiple Languages and Cultural Contexts
Karen Fort | Laura Alonso Alemany | Luciana Benotti | Julien Bezançon | Claudia Borg | Marthese Borg | Yongjian Chen | Fanny Ducel | Yoann Dupont | Guido Ivetta | Zhijian Li | Margot Mieskes | Marco Naguib | Yuyan Qian | Matteo Radaelli | Wolfgang S. Schmeisser-Nieto | Emma Raimundo Schulz | Thiziri Saci | Sarah Saidi | Javier Torroba Marchante | Shilin Xie | Sergio E. Zanotto | Aurélie Névéol
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Warning: This paper contains explicit statements of offensive stereotypes which may be upsetting. The study of bias, fairness and social impact in Natural Language Processing (NLP) lacks resources in languages other than English. Our objective is to support the evaluation of bias in language models in a multilingual setting. We use stereotypes across nine types of biases to build a corpus containing contrasting sentence pairs: one sentence that presents a stereotype concerning a disadvantaged group and another minimally changed sentence concerning a matching advantaged group. We build on the French CrowS-Pairs corpus and guidelines to provide translations of the existing material into seven additional languages. In total, we produce 11,139 new sentence pairs that cover stereotypes dealing with nine types of biases in seven cultural contexts. We use the final resource for the evaluation of relevant monolingual and multilingual masked language models. We find that language models in all languages favor sentences that express stereotypes in most bias categories. The process of creating a resource that covers a wide range of language types and cultural settings highlights the difficulty of bias evaluation, in particular comparability across languages and contexts.
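As a rough sketch of how a masked language model can be scored on such a contrasting pair, one common approach is pseudo-log-likelihood: mask each token in turn and sum the log-probabilities of the original tokens. The exact scoring used in the paper may differ, and the model choice below is illustrative.

# Sketch: pseudo-log-likelihood scoring of a contrasting sentence pair with a
# masked language model. A common approach for this kind of evaluation; the
# paper's exact scoring may differ. Model choice and inputs are placeholders.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL = "bert-base-multilingual-cased"  # placeholder multilingual MLM
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum log P(token | rest of sentence), masking one token at a time."""
    ids = tok(sentence, return_tensors="pt").input_ids[0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

stereo = "..."       # stereotypical sentence from a pair (placeholder)
anti_stereo = "..."  # minimally changed counterpart (placeholder)
if pseudo_log_likelihood(stereo) > pseudo_log_likelihood(anti_stereo):
    print("model assigns higher likelihood to the stereotypical sentence")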