VN-MTEB: Vietnamese Massive Text Embedding Benchmark

Loc Pham, Tung Luu, Thu Vo, Minh Nguyen, Viet Hoang


Abstract
Vietnam ranks among the top countries in both internet traffic and online toxicity. As a result, deploying embedding models for recommendation and content-moderation tasks in applications is crucial. However, the lack of large-scale evaluation datasets, in both volume and task diversity, makes it difficult for researchers to effectively evaluate models before deploying them in real-world, large-scale projects. To address this problem, we introduce VN-MTEB, a Vietnamese benchmark for embedding models, created by translating a large number of English samples from the Massive Text Embedding Benchmark using our new automated framework; it thereby extends the Massive Multilingual Text Embedding Benchmark with additional Vietnamese tasks and datasets. We leverage the strengths of large language models (LLMs) and state-of-the-art embedding models in our translation and filtering pipeline to retain high-quality samples, ensuring natural linguistic flow and semantic fidelity while preserving named entities and code snippets. Our comprehensive benchmark comprises 41 datasets spanning six tasks specifically designed for Vietnamese text embeddings. In our analysis, we find that larger and more complex models using Rotary Positional Embedding outperform those using Absolute Positional Embedding on embedding tasks.
Anthology ID:
2026.findings-eacl.86
Volume:
Findings of the Association for Computational Linguistics: EACL 2026
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1705–1725
URL:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.86/
Cite (ACL):
Loc Pham, Tung Luu, Thu Vo, Minh Nguyen, and Viet Hoang. 2026. VN-MTEB: Vietnamese Massive Text Embedding Benchmark. In Findings of the Association for Computational Linguistics: EACL 2026, pages 1705–1725, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
VN-MTEB: Vietnamese Massive Text Embedding Benchmark (Pham et al., Findings 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.86.pdf
Checklist:
2026.findings-eacl.86.checklist.pdf