Multilingual Self-Taught Faithfulness Evaluators

Carlo Alfano, Aymen Al Marjani, Zeno Jonke, Amin Mantrach, Saab Mansour, Marcello Federico


Abstract
The growing use of large language models (LLMs) has increased the need for automatic evaluation systems, particularly to address the challenge of information hallucination. Although existing faithfulness evaluation approaches have shown promise, they are predominantly English-focused and often require expensive human-labeled training data for fine-tuning specialized models. As LLMs see increased adoption in multilingual contexts, there is a need for accurate faithfulness evaluators that can operate across languages without extensive labeled data. This paper presents STEMF (Self-Taught Evaluators for Multilingual Faithfulness), a framework that learns exclusively from synthetic multilingual data while leveraging cross-lingual transfer learning. Through experiments comparing language-specific and mixed-language fine-tuning approaches, we demonstrate a consistent relationship between an LLM’s general language capabilities and its performance in language-specific evaluation tasks. Our framework shows improvements over existing baselines, including state-of-the-art English evaluators and machine translation-based approaches.
Anthology ID:
2026.findings-eacl.266
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
5035–5051
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.266/
Cite (ACL):
Carlo Alfano, Aymen Al Marjani, Zeno Jonke, Amin Mantrach, Saab Mansour, and Marcello Federico. 2026. Multilingual Self-Taught Faithfulness Evaluators. In Findings of the Association for Computational Linguistics: EACL 2026, pages 5035–5051, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Multilingual Self-Taught Faithfulness Evaluators (Alfano et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.266.pdf
Checklist:
2026.findings-eacl.266.checklist.pdf