Daniel Vila-Suero


2025

Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation
Shivalika Singh | Angelika Romanou | Clémentine Fourrier | David Ifeoluwa Adelani | Jian Gang Ngui | Daniel Vila-Suero | Peerat Limkonchotiwat | Kelly Marchisio | Wei Qi Leong | Yosephine Susanto | Raymond Ng | Shayne Longpre | Sebastian Ruder | Wei-Yin Ko | Antoine Bosselut | Alice Oh | Andre Martins | Leshem Choshen | Daphne Ippolito | Enzo Ferrante | Marzieh Fadaee | Beyza Ermis | Sara Hooker
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Reliable multilingual evaluation is difficult, and culturally appropriate evaluation is even harder to achieve. A common practice to fill this gap is to machine-translate English evaluation sets. However, translation introduces language bias and carries over cultural and regional assumptions from the original questions – often testing knowledge irrelevant to the target audience. In this work, we highlight the extent and impact of these biases and present a multilingual evaluation framework that aims to mitigate them through improved translations and annotation practices. Through a large-scale study involving professional and community translators and annotators, we show that state-of-the-art models excel primarily by learning Western-centric concepts. Notably, we find that model rankings on the full MMLU change when evaluated on a subset of questions explicitly marked as culturally sensitive. We release Global MMLU, a multilingual extension of MMLU across 42 languages, featuring improved translation quality, expanded language coverage, and designated subsets labeled as culturally sensitive and culturally agnostic to enable a more comprehensive and equitable benchmark for evaluating language models across diverse linguistic and cultural contexts.
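The comparison the abstract describes, scoring models separately on the culturally sensitive and culturally agnostic subsets and checking whether the rankings diverge, can be sketched in a few lines. The record layout, subset tags, and model names below are illustrative assumptions, not the paper's actual evaluation code.

```python
# Minimal sketch: compare model rankings on a culturally sensitive ("CS")
# vs. a culturally agnostic ("CA") subset. Field names are placeholders.
from collections import defaultdict

def accuracy_by_subset(results):
    """results: iterable of dicts like {"model": "A", "subset": "CS", "correct": True}."""
    totals = defaultdict(lambda: [0, 0])          # (model, subset) -> [correct, seen]
    for r in results:
        key = (r["model"], r["subset"])
        totals[key][0] += int(r["correct"])
        totals[key][1] += 1
    return {k: c / n for k, (c, n) in totals.items() if n}

def ranking(scores, subset):
    """Rank models by accuracy on one subset, best first."""
    models = {m for m, s in scores if s == subset}
    return sorted(models, key=lambda m: scores[(m, subset)], reverse=True)

# Toy example: rankings may differ between the two subsets.
results = [
    {"model": "A", "subset": "CS", "correct": True},
    {"model": "A", "subset": "CA", "correct": False},
    {"model": "B", "subset": "CS", "correct": False},
    {"model": "B", "subset": "CA", "correct": True},
]
scores = accuracy_by_subset(results)
print(ranking(scores, "CS"), ranking(scores, "CA"))
```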

2021

System description for ProfNER - SMMH: Optimized finetuning of a pretrained transformer and word vectors
David Carreto Fidalgo | Daniel Vila-Suero | Francisco Aranda Montes | Ignacio Talavera Cepeda
Proceedings of the Sixth Social Media Mining for Health (#SMM4H) Workshop and Shared Task

This shared-task system description presents two neural network architectures submitted to the ProfNER track, among them the winning system, which scored highest in the two sub-tasks 7a and 7b. We describe in detail the approach, the preprocessing steps, and the architectures used to achieve the submitted results, and provide a GitHub repository to reproduce the scores. The winning system is based on a transformer-based pretrained language model and solves the two sub-tasks simultaneously.
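One way to realize the abstract's key design point, a single pretrained language model solving both sub-tasks simultaneously, is a shared encoder with two heads, one producing a sentence-level prediction and one producing token-level predictions. This is a sketch under assumptions, not the submitted system: the encoder name, label counts, and the sentence-level/token-level split between the heads are placeholders.

```python
# Minimal sketch of a shared encoder with two task heads (not the submitted system).
import torch.nn as nn
from transformers import AutoModel

class TwoTaskModel(nn.Module):
    def __init__(self, model_name="dccuchile/bert-base-spanish-wwm-cased",  # placeholder encoder
                 num_sentence_labels=2, num_token_labels=3):                # placeholder label counts
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.sentence_head = nn.Linear(hidden, num_sentence_labels)  # sentence-level prediction
        self.token_head = nn.Linear(hidden, num_token_labels)        # token-level (NER-style) prediction

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden_states = out.last_hidden_state                        # (batch, seq_len, hidden)
        sentence_logits = self.sentence_head(hidden_states[:, 0])    # first-token representation
        token_logits = self.token_head(hidden_states)                # per-token logits
        return sentence_logits, token_logits
```

Training such a model would typically sum a loss term per head, so both sub-tasks update the shared encoder jointly.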

2015

Reconciling Heterogeneous Descriptions of Language Resources
John Philip McCrae | Philipp Cimiano | Victor Rodríguez Doncel | Daniel Vila-Suero | Jorge Gracia | Luca Matteis | Roberto Navigli | Andrejs Abele | Gabriela Vulcu | Paul Buitelaar
Proceedings of the 4th Workshop on Linked Data in Linguistics: Resources and Applications

2014

Enabling Language Resources to Expose Translations as Linked Data on the Web
Jorge Gracia | Elena Montiel-Ponsoda | Daniel Vila-Suero | Guadalupe Aguado-de-Cea
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

Language resources, such as multilingual lexica and multilingual electronic dictionaries, contain collections of lexical entries in several languages. Having access to the corresponding explicit or implicit translation relations between such entries can be of great interest for many NLP-based applications. By using Semantic Web techniques, translations can be made available on the Web and consumed directly by other (semantically enabled) resources, without relying on application-specific formats. To that end, in this paper we propose a model for representing translations as linked data, as an extension of the lemon model. Our translation module represents core information associated with term translations and does not commit to specific views or translation theories. As a proof of concept, we have extracted the translations of the terms contained in Terminesp, a multilingual terminological database, and represented them as linked data. We have made them accessible on the Web both for humans (via a Web interface) and for software agents (via a SPARQL endpoint).
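As a rough illustration of the kind of data the abstract describes, the sketch below builds a small RDF graph in which a translation between two lexical entries is reified as its own resource and then queried with SPARQL, the same access pattern a public endpoint would offer. The namespaces, property names, and entry URIs are placeholders invented for this example; they are not the actual vocabulary of the lemon translation module or of Terminesp.

```python
# Illustrative sketch with rdflib: a reified translation relation queried via SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/lexicon/")       # placeholder lexicon namespace
TR = Namespace("http://example.org/translation#")   # placeholder stand-in for a translation module

g = Graph()
g.bind("tr", TR)

# Two lexical entries (Spanish and English) with written representations.
g.add((EX.ordenador_es, RDF.type, EX.LexicalEntry))
g.add((EX.ordenador_es, EX.writtenRep, Literal("ordenador", lang="es")))
g.add((EX.computer_en, RDF.type, EX.LexicalEntry))
g.add((EX.computer_en, EX.writtenRep, Literal("computer", lang="en")))

# The translation itself is a resource linking source and target entries.
g.add((EX.trans1, RDF.type, TR.Translation))
g.add((EX.trans1, TR.translationSource, EX.ordenador_es))
g.add((EX.trans1, TR.translationTarget, EX.computer_en))

# Consumers can retrieve translation pairs with SPARQL, as an endpoint would allow.
q = """
SELECT ?src ?tgt WHERE {
  ?t a tr:Translation ;
     tr:translationSource ?src ;
     tr:translationTarget ?tgt .
}
"""
for row in g.query(q, initNs={"tr": TR}):
    print(row.src, "->", row.tgt)
```

Serializing the same graph (for example with `g.serialize(format="turtle")`) yields the document form that would actually be published as linked data on the Web.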