FaMTEB: Massive Text Embedding Benchmark in Persian Language
Erfan Zinvandi | Morteza Alikhani | Mehran Sarmadi | Zahra Pourbahman | Sepehr Arvin | Reza Kazemi | Arash Amini
Findings of the Association for Computational Linguistics: EMNLP 2025
In this paper, we introduce a comprehensive benchmark for Persian (Farsi) text embeddings, built upon the Massive Text Embedding Benchmark (MTEB). Our benchmark includes 63 datasets spanning seven different tasks: classification, clustering, pair classification, reranking, retrieval, summary retrieval, and semantic textual similarity. The datasets are a combination of existing, translated, and newly generated (synthetic) data, offering a diverse and robust evaluation framework for Persian language models. All newly translated and synthetic datasets were rigorously evaluated by both humans and automated systems to ensure high quality and reliability. Given the growing adoption of text embedding models in chatbots, evaluation datasets are becoming an essential component of chatbot development and Retrieval-Augmented Generation (RAG) systems. As a contribution, we include chatbot evaluation datasets in the MTEB benchmark for the first time. Additionally, we introduce the novel task of summary retrieval, which is not included in the standard MTEB tasks. Another key contribution of this work is the introduction of a substantial number of new Persian-language NLP datasets for both training and evaluation, many of which have no existing counterparts in Persian. We evaluate the performance of several Persian and multilingual embedding models across a wide range of tasks. This work presents an open-source benchmark with datasets, accompanying code, and a public leaderboard.
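Since the benchmark is built on top of MTEB and released as open source, an evaluation run can follow the usual MTEB workflow. The sketch below shows how one might evaluate a multilingual embedding model on Persian tasks with the open-source `mteb` package; it is a minimal illustration, not the authors' exact evaluation script. The language code ("fas"), the availability of the FaMTEB tasks in the installed `mteb` version, and the model name are assumptions chosen for illustration.

```python
# Minimal sketch: running Persian (Farsi) embedding tasks with the `mteb` package.
# Assumptions: the installed mteb version registers the FaMTEB tasks, Persian is
# selectable via the ISO 639-3 code "fas", and the model name below is only an
# illustrative placeholder for any SentenceTransformers-compatible encoder.
import mteb
from sentence_transformers import SentenceTransformer

# Any model exposing an `encode` method works; swap in the model under evaluation.
model = SentenceTransformer("intfloat/multilingual-e5-base")

# Select all registered tasks whose language includes Persian. Task types mirror
# the benchmark's categories (classification, clustering, pair classification,
# reranking, retrieval, STS, ...).
tasks = mteb.get_tasks(languages=["fas"])

evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/famteb")

# Per-task scores are also written as JSON files under the output folder.
for res in results:
    print(res.task_name)
```

In practice one would restrict `tasks` to the specific FaMTEB task names of interest and aggregate the per-task scores into the leaderboard categories reported in the paper.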