@inproceedings{pham-etal-2026-vn,
title = "{VN}-{MTEB}: {V}ietnamese Massive Text Embedding Benchmark",
author = "Pham, Loc and
Luu, Tung and
Vo, Thu and
Nguyen, Minh and
Hoang, Viet",
editor = "Demberg, Vera and
Inui, Kentaro and
Marquez, Llu{\'i}s",
booktitle = "Findings of the {A}ssociation for {C}omputational {L}inguistics: {EACL} 2026",
month = mar,
year = "2026",
address = "Rabat, Morocco",
publisher = "Association for Computational Linguistics",
url = "https://preview.aclanthology.org/ingest-eacl/2026.findings-eacl.86/",
pages = "1705--1725",
ISBN = "979-8-89176-386-9",
abstract = "Vietnam ranks among the top countries in terms of both internet traffic and online toxicity. As a result, implementing embedding models for recommendation and content-moderation tasks in applications is crucial. However, a lack of large-scale test datasets, in both volume and task diversity, makes it difficult for scientists to effectively evaluate AI models before deploying them in real-world, large-scale projects. To solve this important problem, we introduce VN-MTEB, a Vietnamese benchmark for embedding models, which we created by translating a large number of English samples from the Massive Text Embedding Benchmark using our new automated framework, thereby extending the Massive Multilingual Text Embedding Benchmark with additional Vietnamese tasks and datasets. We leverage the strengths of large language models (LLMs) and cutting-edge embedding models in our translation and filtering processes to retain high-quality samples, guaranteeing natural language flow and semantic fidelity while preserving named entities and code snippets. Our comprehensive benchmark consists of 41 datasets from six tasks specifically designed for Vietnamese text embeddings. In our analysis, we find that bigger and more complex models using Rotary Positional Embedding outperform those using Absolute Positional Embedding in embedding tasks."
}