Truth Knows No Language: Evaluating Truthfulness Beyond English
Blanca Calvo Figueras | Eneko Sagarzazu | Julen Etxaniz | Jeremy Barnes | Pablo Gamallo | Iria de-Dios-Flores | Rodrigo Agerri
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025
We introduce a professionally translated extension of the TruthfulQA benchmark designed to evaluate truthfulness in Basque, Catalan, Galician, and Spanish. Truthfulness evaluations of large language models (LLMs) have primarily focused on English, and the ability of LLMs to maintain truthfulness across languages remains under-explored. Our study evaluates 12 state-of-the-art open LLMs, comparing base and instruction-tuned models using human evaluation, multiple-choice metrics, and LLM-as-a-Judge scoring. Our findings reveal that, while LLMs perform best in English and worst in Basque (the lowest-resourced language), overall truthfulness discrepancies across languages are smaller than anticipated. Furthermore, we show that LLM-as-a-Judge correlates more closely with human judgments than multiple-choice metrics do, and that informativeness plays a critical role in truthfulness assessment. Our results also indicate that machine translation provides a viable approach for extending truthfulness benchmarks to additional languages, offering a scalable alternative to professional translation. Finally, we observe that universal knowledge questions are handled better across languages than context- and time-dependent ones, highlighting the need for truthfulness evaluations that account for cultural and temporal variability. Datasets, models, and code are publicly available under open licenses.