Mohammad Javad Ranjbar Kalahroodi
Also published as: Mohammad Javad Ranjbar
2026
PersianMedQA: Evaluating Large Language Models on a Persian-English Bilingual Medical Question Answering Benchmark
Mohammad Javad Ranjbar Kalahroodi | Amirhossein Sheikholselami | Sepehr Karimi Arpanahi | Sepideh Ranjbar Kalahroodi | Heshaam Faili | Azadeh Shakery
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Large Language Models (LLMs) have achieved remarkable performance on a wide range of Natural Language Processing (NLP) benchmarks, often surpassing human-level accuracy. However, their reliability in high-stakes domains such as medicine, particularly in low-resource languages, remains underexplored. In this work, we introduce PersianMedQA, a large-scale dataset of 20,785 expert-validated multiple-choice Persian medical questions from 14 years of Iranian national medical exams, spanning 23 medical specialties and designed to evaluate LLMs in both Persian and English. We benchmark 41 state-of-the-art models, including general-purpose, Persian, and medical LLMs, in zero-shot and chain-of-thought (CoT) settings. Our results show that closed-weight general models (e.g., GPT-4.1) consistently outperform all other categories, achieving 83.09% accuracy in Persian and 80.7% in English, while Persian LLMs such as Dorna underperform significantly (e.g., 34.9% in Persian), often struggling with both instruction-following and domain reasoning. We also analyze the impact of translation, showing that while English performance is generally higher, 3-10% of questions can only be answered correctly in Persian due to cultural and clinical contextual cues that are lost in translation. Finally, we demonstrate that model size alone is insufficient for robust performance without strong domain or language adaptation. PersianMedQA provides a foundation for evaluating bilingual and culturally grounded medical reasoning in LLMs. The dataset, along with a bilingual medical dictionary, is publicly available at: https://huggingface.co/datasets/MohammadJRanjbar/PersianMedQA.
PersianPunc: A Large-Scale Dataset and BERT-Based Approach for Persian Punctuation Restoration
Mohammad Javad Ranjbar Kalahroodi | Heshaam Faili | Azadeh Shakery
The Proceedings of the First Workshop on NLP and LLMs for the Iranian Language Family
Punctuation restoration is essential for improving the readability and downstream utility of automatic speech recognition (ASR) outputs, yet remains underexplored for Persian despite its importance. We introduce PersianPunc, a large-scale, high-quality dataset of 17 million samples for Persian punctuation restoration, constructed through systematic aggregation and filtering of existing textual resources. We formulate punctuation restoration as a token-level sequence labeling task and fine-tune ParsBERT to achieve strong performance. Through comparative evaluation, we demonstrate that while large language models can perform punctuation restoration, they suffer from critical limitations: over-correction tendencies that introduce undesired edits beyond punctuation insertion (particularly problematic for speech-to-text pipelines) and substantially higher computational requirements. Our lightweight BERT-based approach achieves a macro-averaged F1 score of 91.33% on our test set while maintaining efficiency suitable for real-time applications. We make our dataset and model publicly available to facilitate future research in Persian NLP and provide a scalable framework applicable to other morphologically rich, low-resource languages.
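The token-level sequence-labeling formulation described above can be illustrated with the data-preparation step: strip punctuation from text and label each word with the mark that followed it. This is a hedged sketch with an assumed label scheme (`"O"` for no punctuation), not the authors' exact preprocessing; a model such as ParsBERT would then be fine-tuned to predict these per-token labels.

```python
# Punctuation marks treated as restorable labels (illustrative set).
PUNCT = {",", ".", "?", "!", ":", ";"}

def to_labeled_pairs(text: str) -> list[tuple[str, str]]:
    """Strip trailing punctuation from each word and emit (word, label) pairs."""
    pairs = []
    for raw in text.split():
        label = "O"
        while raw and raw[-1] in PUNCT:
            label = raw[-1]   # keep the mark attached directly to the word
            raw = raw[:-1]
        if raw:
            pairs.append((raw, label))
    return pairs
```

For example, `to_labeled_pairs("Hello, how are you?")` yields `[("Hello", ","), ("how", "O"), ("are", "O"), ("you", "?")]` — the word sequence becomes the model input and the label sequence its prediction target.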
APARSIN: A Multi-Variety Sentiment and Translation Benchmark for Iranic Languages
Sadegh Jafari | Tara Azin | Farhad Roodi | Zahra Dehghani Tafti | Mehrdad Ghadrdan | Elham Vatankhahan Esfahani | Aylin Naebzadeh | Mohammadhadi Shahhosseini | Ghafoor Khan | Kazem Forghani | Danial Namazi | Seyed Mohammad Hossein Hashemi | Farhan Farsi | Mohammad Osoolian | Maede Mohammadi | Mohammad Erfan Zare | Muhammad Hasnain Khan | Muhammad Hussain | Nooreen Zaki | Joma Mohammadi | Shayan Bali | Mohammad Javad Ranjbar | Els Lefever | Veronique Hoste
The Proceedings of the First Workshop on NLP and LLMs for the Iranian Language Family
The Iranic language family includes many underrepresented languages and dialects that remain largely unexplored in modern NLP research. We introduce APARSIN, a multi-variety benchmark covering 14 Iranic languages, dialects, and accents, designed for sentiment analysis and machine translation. The dataset includes both high- and low-resource varieties, several of which are endangered, capturing linguistic variation across them. We evaluate a set of instruction-tuned Large Language Models (LLMs) on these tasks and analyze their performance across the varieties. Our results highlight substantial performance gaps between standard Persian and other Iranic languages and dialects, demonstrating the need for more inclusive multilingual and dialectally diverse NLP benchmarks.
Co-authors
- Heshaam Faili 2
- Azadeh Shakery 2
- Tara Azin 1
- Shayan Bali 1
- Elham Vatankhahan Esfahani 1
- Farhan Farsi 1
- Kazem Forghani 1
- Mehrdad Ghadrdan 1
- Seyed Mohammad Hossein Hashemi 1
- Veronique Hoste 1
- Muhammad Hussain 1
- Sadegh Jafari 1
- Sepehr Karimi Arpanahi 1
- Ghafoor Khan 1
- Muhammad Hasnain Khan 1
- Els Lefever 1
- Maede Mohammadi 1
- Joma Mohammadi 1
- Aylin Naebzadeh 1
- Danial Namazi 1
- Mohammad Osoolian 1
- Sepideh Ranjbar Kalahroodi 1
- Farhad Roodi 1
- Mohammadhadi Shahhosseini 1
- Amirhossein Sheikholselami 1
- Zahra Dehghani Tafti 1
- Nooreen Zaki 1
- Mohammad Erfan Zare 1