PerHalluEval: Persian Hallucination Evaluation Benchmark for Large Language Models

Mohammad Hosseini, Kimia Hosseini, Shayan Bali, Zahra Zanjani, Saeedeh Momtazi


Abstract
Hallucination is a persistent issue affecting all large language models (LLMs), particularly in low-resource languages such as Persian. PerHalluEval (Persian Hallucination Evaluation) is the first dynamic hallucination evaluation benchmark tailored to the Persian language. Our benchmark leverages a three-stage LLM-driven pipeline, augmented with human validation, to generate plausible answers and summaries for question answering (QA) and summarization tasks, focusing on detecting extrinsic and intrinsic hallucinations. Moreover, we used the log probabilities of generated tokens to select the most believable hallucinated instances. In addition, we engaged human annotators to highlight Persian-specific contexts in the QA dataset in order to evaluate LLMs’ performance on content specifically related to Persian culture. Our evaluation of 12 LLMs, both open- and closed-source, on PerHalluEval revealed that the models generally struggle to detect hallucinated Persian text. We showed that providing external knowledge, i.e., the original document in the summarization task, can partially mitigate hallucination. Furthermore, we found no significant difference in hallucination between LLMs specifically trained for Persian and other models.
Anthology ID:
2026.lrec-main.243
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resource Association
Pages:
3107–3127
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.243/
Cite (ACL):
Mohammad Hosseini, Kimia Hosseini, Shayan Bali, Zahra Zanjani, and Saeedeh Momtazi. 2026. PerHalluEval: Persian Hallucination Evaluation Benchmark for Large Language Models. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 3107–3127, Palma de Mallorca, Spain. ELRA Language Resource Association.
Cite (Informal):
PerHalluEval: Persian Hallucination Evaluation Benchmark for Large Language Models (Hosseini et al., LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.243.pdf