LCHAIM - Investigating Long Context Reasoning in Hebrew

Ehud Malul, Oriel Perets, Ziv Mor, Yigal Kassel, Elior Sulem


Abstract
Natural Language Inference (NLI) has gained significant attention recently due to its importance in understanding how machines comprehend and reason about language. While English has received tremendous interest, Morphologically Rich Languages (MRLs) like Hebrew require more research. In this paper, we address the evaluation of Hebrew NLI models by introducing LCHAIM, a dataset designed to evaluate these models on tasks involving long premises and complex reasoning. The dataset, created by translating and validating the English ConTRoL dataset, consists of 8,325 context-hypothesis pairs that require coreferential, temporal, logical, and analytical reasoning. Our experiments show the difficulty of contextual reasoning in Hebrew, as evidenced by the performance of different models. Fine-tuning the LongHero model on both the shorter-premise Hebrew NLI dataset and LCHAIM yielded a mean accuracy of 52%, which is 35% below human performance. Similarly, Large Language Models (LLMs) such as Gemma-9B, Dicta-LM-2.0-7B, and GPT-4o achieved a top mean accuracy of 60.12% in a few-shot setting.
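For readers who want to probe the task format, the sketch below (ours, not the authors' code) shows how a sequence-classification checkpoint could be queried on a single LCHAIM-style context-hypothesis pair with the Hugging Face transformers API. The model name is a hypothetical placeholder, and the three-way label mapping is read from the checkpoint's own config rather than hardcoded.

```python
# Minimal sketch of three-way NLI inference on a (long) premise-hypothesis
# pair, assuming a Hugging Face checkpoint fine-tuned for Hebrew NLI.
# "your-org/hebrew-nli-long-context" is a placeholder, not a released model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/hebrew-nli-long-context"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def classify(premise: str, hypothesis: str) -> str:
    # Encode premise and hypothesis as one sequence pair; truncation matters
    # here because LCHAIM premises are passage-length, not sentence-length.
    inputs = tokenizer(premise, hypothesis, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Map the argmax logit back to entailment / neutral / contradiction
    # using the label mapping stored in the checkpoint's config.
    return model.config.id2label[logits.argmax(dim=-1).item()]
```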
Anthology ID:
2025.findings-acl.413
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7928–7939
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.413/
Cite (ACL):
Ehud Malul, Oriel Perets, Ziv Mor, Yigal Kassel, and Elior Sulem. 2025. LCHAIM - Investigating Long Context Reasoning in Hebrew. In Findings of the Association for Computational Linguistics: ACL 2025, pages 7928–7939, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
LCHAIM - Investigating Long Context Reasoning in Hebrew (Malul et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.413.pdf