George Turkiyyah
2025
ALARB: An Arabic Legal Argument Reasoning Benchmark
Harethah Abu Shairah | Somayah S. Alharbi | Abdulaziz A. AlHussein | Sameer Alsabea | Omar Shaqaqi | Hebah A. Alshamlan | Omar Knio | George Turkiyyah
Proceedings of The Third Arabic Natural Language Processing Conference
We introduce ALARB, a dataset and suite of tasks designed to evaluate the reasoning capabilities of large language models (LLMs) within the Arabic legal domain. While existing Arabic benchmarks cover some knowledge-intensive tasks such as retrieval and understanding, substantial datasets focusing specifically on multistep reasoning for Arabic LLMs, especially in open-ended contexts, are lacking. The dataset comprises over 13K commercial court cases from Saudi Arabia, with each case including the facts presented, the reasoning of the court, the verdict, as well as the cited clauses extracted from the regulatory documents. We define a set of challenging tasks leveraging this dataset and reflecting the complexity of real-world legal reasoning, including verdict prediction, completion of reasoning chains in multistep legal arguments, and identification of relevant regulations based on case facts. We benchmark a representative selection of current open and closed Arabic LLMs on these tasks and demonstrate the dataset’s utility for instruction tuning. Notably, we show that instruction tuning a modest 12B-parameter model using ALARB significantly enhances its performance in verdict prediction and Arabic verdict generation, reaching a level comparable to that of GPT-4o.
2024
ArabLegalEval: A Multitask Benchmark for Assessing Arabic Legal Knowledge in Large Language Models
Faris Hijazi | Somayah Alharbi | Abdulaziz AlHussein | Harethah Shairah | Reem Alzahrani | Hebah Alshamlan | George Turkiyyah | Omar Knio
Proceedings of the Second Arabic Natural Language Processing Conference
The rapid advancements in Large Language Models (LLMs) have led to significant improvements in various natural language processing tasks. However, the evaluation of LLMs’ legal knowledge, particularly in non-English languages such as Arabic, remains under-explored. To address this gap, we introduce ArabLegalEval, a multitask benchmark dataset for assessing the Arabic legal knowledge of LLMs. Inspired by the MMLU and LegalBench datasets, ArabLegalEval consists of multiple tasks sourced from Saudi legal documents and synthesized questions. In this work, we aim to analyze the capabilities required to solve legal problems in Arabic and benchmark the performance of state-of-the-art LLMs. We explore the impact of in-context learning on performance and investigate various evaluation methods. Additionally, we examine workflows for automatically generating questions with automatic validation to enhance the dataset’s quality. By releasing ArabLegalEval and our code, we hope to accelerate AI research in the Arabic legal domain.