2025
Tooka-SBERT: Lightweight Sentence Embedding models for Persian
Ghazal Zamaninejad | MohammadAli SadraeiJavaheri | Farnaz Aghababaloo | Hamideh Rafiee | Milad Molazadeh Oskuee | AmirMohammad Salehoof
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
We introduce Tooka-SBERT, a family of Persian sentence embedding models designed to enhance semantic understanding for Persian. The models are released in two sizes—Small (123M parameters) and Large (353M parameters)—both built upon the TookaBERT backbone. Tooka-SBERT is pretrained on the Targoman News corpus and fine-tuned using high-quality synthetic Persian sentence pair datasets to improve semantic alignment. We evaluate Tooka-SBERT on PTEB, a Persian adaptation of the MTEB benchmark, where the Large model achieves an average score of 70.54% and the Small model 69.49%, outperforming some strong multilingual baselines. Tooka-SBERT provides a compact and high-performing open-source solution for Persian sentence representation, with efficient inference suitable for both GPU and CPU environments. Our models are publicly available on Hugging Face, and the corresponding benchmark results can be viewed on the PTEB Leaderboard.
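Since the models are released on the Hugging Face Hub, they can presumably be loaded through the standard sentence-transformers interface. Below is a minimal usage sketch; the model ID "PartAI/Tooka-SBERT" is an assumption and should be checked against the Hub listing.

```python
# Minimal usage sketch, assuming the standard sentence-transformers interface.
# The model ID below is an assumption -- check the Hugging Face Hub for the
# exact names of the Small and Large Tooka-SBERT releases.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("PartAI/Tooka-SBERT")  # hypothetical model ID

sentences = [
    "هوا امروز آفتابی است.",  # "The weather is sunny today."
    "امروز آسمان صاف است.",   # "The sky is clear today."
]
embeddings = model.encode(sentences)               # one vector per sentence
print(util.cos_sim(embeddings[0], embeddings[1]))  # semantic similarity
```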
MELAC: Massive Evaluation of Large Language Models with Alignment of Culture in Persian Language
Farhan Farsi | Farnaz Aghababaloo | Shahriar Shariati Motlagh | Parsa Ghofrani | MohammadAli SadraeiJavaheri | Shayan Bali | Amir Hossein Shabani | Farbod Bijary | Ghazal Zamaninejad | AmirMohammad Salehoof | Saeedeh Momtazi
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
As large language models (LLMs) become increasingly embedded in our daily lives, evaluating their quality and reliability across diverse contexts has become essential. While comprehensive benchmarks exist for assessing LLM performance in English, there remains a significant gap in evaluation resources for other languages. Moreover, because most LLMs are trained primarily on data rooted in European and American cultures, they often lack familiarity with non-Western cultural contexts. To address this limitation, our study focuses on the Persian language and Iranian culture. We introduce 19 new evaluation datasets specifically designed to assess LLMs on topics such as Iranian law, Persian grammar, Persian idioms, and university entrance exams. Using these datasets, we benchmarked 41 prominent LLMs, aiming to bridge the existing cultural and linguistic evaluation gap in the field. The evaluation results are publicly available on our live leaderboard: https://huggingface.co/spaces/opll-org/Open-Persian-LLM-Leaderboard
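For context, leaderboards of this kind commonly score multiple-choice items by comparing the log-likelihood a model assigns to each answer option. The sketch below illustrates that generic scoring scheme only; it is not the authors' evaluation harness, and the model name and prompt format are placeholders.

```python
# Generic multiple-choice scoring sketch: pick the option the LM finds most
# likely as a continuation. Model choice and formatting are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "gpt2"  # stand-in; any causal LM on the Hub works the same way
tok = AutoTokenizer.from_pretrained(name)
lm = AutoModelForCausalLM.from_pretrained(name).eval()

def option_logprob(question: str, option: str) -> float:
    """Sum of token log-probs the LM assigns to `option` given `question`.

    Assumes the question tokens form a prefix of the question+option
    tokenization, which holds for typical BPE tokenizers when the option
    starts with a space.
    """
    q_len = tok(question, return_tensors="pt").input_ids.shape[1]
    full = tok(question + " " + option, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(full).logits
    logprobs = logits[:, :-1].log_softmax(-1)          # predicts tokens 1..T-1
    targets = full[:, 1:]
    token_lp = logprobs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp[:, q_len - 1:].sum().item()        # option tokens only

def predict(question: str, options: list[str]) -> int:
    return max(range(len(options)),
               key=lambda i: option_logprob(question, options[i]))
```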
NLPART at SemEval-2025 Task 4: Forgetting is harder than Learning
Hoorieh Sabzevari | Milad Molazadeh Oskuee | Tohid Abedini | Ghazal Zamaninejad | Sara Baruni | Zahra Amirmahani | Amirmohammad Salehoof
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
Unlearning is a critical capability for ensuring privacy, security, and compliance in AI systems, enabling models to forget specific data while retaining overall performance. In this work, we participated in Task 4 of SemEval 2025, which focused on unlearning across three sub-tasks: (1) long-form synthetic creative documents, (2) short-form synthetic biographies containing personally identifiable information, and (3) real documents sampled from the target model’s training dataset. We conducted four experiments, employing Supervised Fine-Tuning (SFT) and Negative Preference Optimization (NPO). Despite achieving good performance on the retain set—data that the model was supposed to remember—our findings demonstrate that these techniques did not perform well on the forget set, where unlearning was required.
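For reference, the NPO objective (Zhang et al., 2024) treats forget-set samples as negative preferences, pushing the policy's likelihood on them below that of a frozen reference model. A minimal sketch of that loss, assuming per-sequence log-probabilities have already been computed:

```python
import torch
import torch.nn.functional as F

def npo_loss(policy_logprobs: torch.Tensor,
             ref_logprobs: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Negative Preference Optimization loss on forget-set sequences.

    Both arguments are summed per-sequence log-probabilities: one from the
    model being unlearned, one from a frozen copy taken before unlearning.
    """
    log_ratio = policy_logprobs - ref_logprobs
    # Equivalent to (2/beta) * E[log(1 + (pi/pi_ref)^beta)]; minimizing it
    # drives the policy's probability on forget samples below the reference's.
    return -(2.0 / beta) * F.logsigmoid(-beta * log_ratio).mean()
```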
2023
IUST at ImageArg: The First Shared Task in Multimodal Argument Mining
Melika Nobakhtian | Ghazal Zamaninejad | Erfan Moosavi Monazzah | Sauleh Eetemadi
Proceedings of the 10th Workshop on Argument Mining
ImageArg is a shared task at the 10th ArgMining Workshop at EMNLP 2023. It leverages the ImageArg dataset to advance multimodal persuasiveness techniques. This challenge comprises two distinct subtasks: 1) Argumentative Stance (AS) Classification: Assessing whether a given tweet adopts an argumentative stance. 2) Image Persuasiveness (IP) Classification: Determining if the tweet image enhances the persuasive quality of the tweet. We conducted various experiments on both subtasks and ranked sixth out of the nine participating teams.
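A common baseline for subtasks like these is late fusion of precomputed text and image features. The sketch below shows that generic pattern under assumed feature dimensions; it is not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Concatenate precomputed text and image features, then classify.

    A generic late-fusion sketch; the 768/512 dimensions assume BERT-style
    text features and CLIP-style image features.
    """
    def __init__(self, text_dim=768, image_dim=512, hidden=256, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, text_feat, image_feat):
        return self.head(torch.cat([text_feat, image_feat], dim=-1))
```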
ROZAM at SemEval 2023 Task 9: Multilingual Tweet Intimacy Analysis
Mohammadmostafa Rostamkhani | Ghazal Zamaninejad | Sauleh Eetemadi
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
We build a model using XLM-T, a large multilingual pretrained language model, for a regression task and fine-tune it on the MINT (Multilingual INTimacy) analysis dataset, which covers 6 languages for training and 4 languages for testing the model's zero-shot performance. The dataset is annotated with intimacy scores. We experiment with several deep learning architectures to predict the intimacy score, modifying several model settings, including the loss function and the number and type of layers, to achieve optimal performance. In total, we ran 16 end-to-end experiments. Our best system achieved a Pearson correlation score of 0.52.
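As a concrete illustration, the standard way to fine-tune a checkpoint for such a regression task with Hugging Face transformers is to attach a single-output head, which makes the model use an MSE loss. A minimal sketch, assuming the public cardiffnlp/twitter-xlm-roberta-base XLM-T checkpoint; the input text and label value are illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# XLM-T: XLM-R further pretrained on multilingual Twitter data.
name = "cardiffnlp/twitter-xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=1, problem_type="regression"  # single output -> MSE loss
)

batch = tokenizer(["I miss you so much!"], return_tensors="pt", padding=True)
labels = torch.tensor([[4.2]])  # intimacy score (illustrative value)
out = model(**batch, labels=labels)
out.loss.backward()  # gradients for one fine-tuning step
```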