Anna Sallés
Also published as: Anna Salles
2026
EsBBQ and CaBBQ: The Spanish and Catalan Bias Benchmarks for Question Answering
Valle Ruiz-Fernández | Mario Mina | Júlia Falcão | Luis Antonio Vasquez Reina | Anna Salles | Aitor Gonzalez-Agirre | Olatz Perez-de-Viñaspre
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Previous literature has largely shown that Large Language Models (LLMs) perpetuate social biases learnt from their pre-training data. Given the notable lack of resources for social bias evaluation in languages other than English, and for social contexts outside of the United States, this paper introduces the Spanish and the Catalan Bias Benchmarks for Question Answering (EsBBQ and CaBBQ). Based on the original BBQ, these two parallel datasets are designed to assess social bias across 10 categories using a multiple-choice QA setting, now adapted to the Spanish and Catalan languages and to the social context of Spain. We report evaluation results on different LLMs, factoring in model family, size and variant. Our results show that models tend to fail to choose the correct answer in ambiguous scenarios, and that high QA accuracy often correlates with greater reliance on social biases.
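As an illustration of the multiple-choice QA setting the abstract describes, the sketch below shows what a BBQ-style item and its scoring in the ambiguous condition might look like. The field names, the Spanish example, and the `is_biased_error` helper are illustrative assumptions, not the actual EsBBQ/CaBBQ schema or evaluation code.

```python
# Illustrative sketch of a BBQ-style item in the ambiguous condition.
# All field names and the example text are assumptions for illustration.

item = {
    # Ambiguous context: no evidence points to either person.
    "context": "Una mujer mayor y un hombre joven esperaban en la parada.",
    "question": "¿Quién tuvo problemas para usar el teléfono móvil?",
    "choices": ["La mujer mayor", "El hombre joven", "No hay suficiente información"],
    "label": 2,        # in ambiguous contexts the correct answer is "unknown"
    "stereotyped": 0,  # index of the answer a social stereotype would pick
}

def is_biased_error(prediction: int, item: dict) -> bool:
    """A wrong answer that matches the stereotype counts toward the bias score."""
    return prediction != item["label"] and prediction == item["stereotyped"]
```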
2025
IberoBench: A Benchmark for LLM Evaluation in Iberian Languages
Irene Baucells | Javier Aula-Blasco | Iria de-Dios-Flores | Silvia Paniagua Suárez | Naiara Perez | Anna Salles | Susana Sotelo Docio | Júlia Falcão | Jose Javier Saiz | Robiert Sepulveda Torres | Jeremy Barnes | Pablo Gamallo | Aitor Gonzalez-Agirre | German Rigau | Marta Villegas
Proceedings of the 31st International Conference on Computational Linguistics
The current best practice to measure the performance of base Large Language Models is to establish a multi-task benchmark that covers a range of capabilities of interest. Currently, however, such benchmarks are only available in a few high-resource languages. To address this situation, we present IberoBench, a multilingual, multi-task benchmark for Iberian languages (i.e., Basque, Catalan, Galician, European Spanish and European Portuguese) built on the LM Evaluation Harness framework. The benchmark consists of 62 tasks divided into 179 subtasks. We evaluate 33 existing LLMs on IberoBench in 0- and 5-shot settings. We also discuss the issues we encountered when working with the Harness and our approach to solving them to ensure high-quality evaluation.
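Since the benchmark is built on the LM Evaluation Harness, a run like the paper's 5-shot setting could look roughly like the sketch below, using the Harness's public Python entry point (`lm_eval.simple_evaluate`). The task identifier `iberobench_catalan` and the model checkpoint are hypothetical placeholders; check the released configs for the actual task names.

```python
# Minimal sketch of a Harness-based evaluation run, assuming the task name
# "iberobench_catalan" (hypothetical) and a Hugging Face model backend.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                    # Hugging Face backend
    model_args="pretrained=meta-llama/Llama-2-7b-hf",
    tasks=["iberobench_catalan"],                  # hypothetical task name
    num_fewshot=5,                                 # the paper reports 0- and 5-shot
    batch_size=8,
)
print(results["results"])  # per-task metrics keyed by task name
```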
Multi-LMentry: Can Multilingual LLMs Solve Elementary Tasks Across Languages?
Luca Moroni | Javier Aula-Blasco | Simone Conia | Irene Baucells | Naiara Perez | Silvia Paniagua Suárez | Anna Sallés | Malte Ostendorff | Júlia Falcão | Guijin Son | Aitor Gonzalez-Agirre | Roberto Navigli | Marta Villegas
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
As large language models (LLMs) continue to improve, their evaluation increasingly centers on complex, high-level tasks, often at the expense of systematically assessing fundamental capabilities. To address this gap, recent work proposed LMentry, a compact benchmark comprising tasks that are trivial for humans but remain surprisingly difficult for LLMs. However, LMentry is limited to English, leaving its insights linguistically narrow. In this paper, we present Multi-LMentry, a ground-up recreation of LMentry that enables systematic evaluation of LLMs on basic reasoning and understanding tasks across nine diverse languages. Multi-LMentry includes English and expands to Basque, Brazilian Portuguese, Catalan, Galician, German, Italian, Korean, and Spanish, emphasizing the importance of cross-lingual and low-resource settings. To validate that Multi-LMentry is still trivial for humans, we demonstrate that L2 speakers with only elementary proficiency achieve near-perfect scores in a low-resource language, namely, Basque. Through extensive experiments, we reveal that state-of-the-art open-weight multilingual LLMs still fall short of human performance on elementary tasks in many languages. Our results expose new failure modes that remain hidden in monolingual evaluation, underscoring the need for rigorous, language-diverse “unit tests” of core model abilities.
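To give a flavor of what an LMentry-style "elementary task" and its automatic check look like, here is a minimal sketch. The prompt, the target word, and the regex-based verifier are illustrative assumptions, not taken from the Multi-LMentry release.

```python
# Sketch of an LMentry-style elementary task with an automatic checker.
# The task ("write a sentence ending with a given word") and this verifier
# are illustrative, not the actual Multi-LMentry implementation.
import re

prompt = 'Write a sentence that ends with the word "mesa".'  # Spanish target word

def ends_with_word(response: str, word: str) -> bool:
    """Accept the response if its final word (ignoring punctuation) matches."""
    tokens = re.findall(r"\w+", response.lower())
    return bool(tokens) and tokens[-1] == word.lower()

assert ends_with_word("Dejé el libro sobre la mesa.", "mesa")
```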
Co-authors
- Júlia Falcão 3
- Aitor González-Agirre 3
- Javier Aula-Blasco 2
- Irene Baucells 2
- Naiara Pérez 2
- Silvia Paniagua Suárez 2
- Marta Villegas 2
- Jeremy Barnes 1
- Simone Conia 1
- Pablo Gamallo 1
- Mario Mina 1
- Luca Moroni 1
- Roberto Navigli 1
- Malte Ostendorff 1
- Olatz Perez-de-Viñaspre 1
- German Rigau 1
- Valle Ruiz-Fernández 1
- José Javier Saiz 1
- Robiert Sepúlveda-Torres 1
- Guijin Son 1
- Susana Sotelo 1
- Luis Antonio Vasquez Reina 1
- Iria de-Dios-Flores 1