The Russian-focused embedders’ exploration: ruMTEB benchmark and Russian embedding model design

Artem Snegirev, Maria Tikhonova, Maksimova Anna, Alena Fenogenova, Aleksandr Abramov


Abstract
Embedding models play a crucial role in Natural Language Processing (NLP) by creating text embeddings used in various tasks such as information retrieval and assessing semantic text similarity. This paper focuses on research related to embedding models for the Russian language. It introduces a new Russian-focused embedding model called ru-en-RoSBERTa and the ruMTEB benchmark, a Russian extension of the Massive Text Embedding Benchmark (MTEB). The benchmark includes seven categories of tasks, such as semantic textual similarity, text classification, reranking, and retrieval. The research also evaluates a representative set of Russian and multilingual models on the proposed benchmark. The findings indicate that the new model achieves results on par with state-of-the-art models for Russian. We release the ru-en-RoSBERTa model, and ruMTEB ships with open-source code, integration into the original MTEB framework, and a public leaderboard.
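As a rough illustration of how the released artifacts are typically used, below is a minimal sketch that encodes Russian sentences with the released embedder and runs one MTEB-style task through the standard harness. The Hugging Face model id (ai-forever/ru-en-RoSBERTa) and the task name are assumptions based on the release described in the abstract, not confirmed identifiers; consult the paper's repository and the public leaderboard for the exact names and any required input prefixes.

# Minimal sketch (assumed identifiers): encode Russian text with
# ru-en-RoSBERTa and evaluate it on an MTEB-style task.
from sentence_transformers import SentenceTransformer, util
from mteb import MTEB

# Assumed Hugging Face model id for the released embedder.
model = SentenceTransformer("ai-forever/ru-en-RoSBERTa")

# Encode a pair of Russian sentences and compare them with cosine similarity.
sentences = [
    "Кошка сидит на окне.",        # "The cat is sitting on the window."
    "На подоконнике сидит кот.",   # "A cat is sitting on the windowsill."
]
embeddings = model.encode(sentences, normalize_embeddings=True)
print(util.cos_sim(embeddings[0], embeddings[1]))

# Run one Russian STS task through the MTEB harness
# (the task name here is illustrative).
evaluation = MTEB(tasks=["RuSTSBenchmarkSTS"])
evaluation.run(model, output_folder="results/ru-en-RoSBERTa")

Note that any instruction or prefix conventions the model may expect (e.g., for queries versus documents) are omitted here; the model card and the ruMTEB integration in the MTEB codebase are the authoritative references.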
Anthology ID:
2025.naacl-long.12
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
236–254
URL:
https://preview.aclanthology.org/landing_page/2025.naacl-long.12/
Cite (ACL):
Artem Snegirev, Maria Tikhonova, Maksimova Anna, Alena Fenogenova, and Aleksandr Abramov. 2025. The Russian-focused embedders’ exploration: ruMTEB benchmark and Russian embedding model design. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 236–254, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
The Russian-focused embedders’ exploration: ruMTEB benchmark and Russian embedding model design (Snegirev et al., NAACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.naacl-long.12.pdf