Kavsar Huseynova
2025
TUMLU: A Unified and Native Language Understanding Benchmark for Turkic Languages
Jafar Isbarov | Arofat Akhundjanova | Mammad Hajili | Kavsar Huseynova | Dmitry Gaynullin | Anar Rzayev | Osman Tursun | Aizirek Turdubaeva | Ilshat Saetov | Rinat Kharisov | Saule Belginova | Ariana Kenbayeva | Amina Alisheva | Abdullatif Köksal | Samir Rustamov | Duygu Ataman
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Being able to thoroughly assess massive multi-task language understanding (MMLU) capabilities is essential for advancing the applicability of multilingual language models. However, preparing such benchmarks in high-quality native languages is often costly and therefore limits the representativeness of evaluation datasets. While recent efforts have focused on building more inclusive MMLU benchmarks, these are conventionally built using machine translation from high-resource languages, which may introduce errors and fail to account for the linguistic and cultural intricacies of the target languages. In this paper, we address the lack of native-language MMLU benchmarks for the under-represented Turkic language family, which has distinct morphosyntactic and cultural characteristics. We propose two benchmarks for Turkic-language MMLU. TUMLU is a comprehensive, multilingual, and natively developed language understanding benchmark specifically designed for Turkic languages. It consists of middle- and high-school level questions spanning 11 academic subjects in Azerbaijani, Crimean Tatar, Karakalpak, Kazakh, Kyrgyz, Tatar, Turkish, Uyghur, and Uzbek. We also present TUMLU-mini, a more concise, balanced, and manually verified subset of the dataset. Using this dataset, we systematically evaluate a diverse range of open and proprietary multilingual large language models (LLMs), including Claude, Gemini, GPT, and LLaMA, offering an in-depth analysis of their performance across different languages, subjects, and alphabets. To promote further research and development in multilingual language understanding, we release TUMLU-mini and all corresponding evaluation scripts.
2024
Open foundation models for Azerbaijani language
Jafar Isbarov | Kavsar Huseynova | Elvin Mammadov | Mammad Hajili | Duygu Ataman
Proceedings of the First Workshop on Natural Language Processing for Turkic Languages (SIGTURK 2024)
The emergence of multilingual large language models has enabled the development of language understanding and generation systems in Azerbaijani. However, most production-grade systems rely on cloud solutions, such as GPT-4. While there have been several attempts to develop open foundation models for Azerbaijani, these works have not found their way into common use due to a lack of systematic benchmarking. This paper encompasses several lines of work that promote open-source foundation models for Azerbaijani. We introduce (1) a large text corpus for Azerbaijani, (2) a family of encoder-only language models trained on this dataset, (3) labeled datasets for evaluating these models, and (4) an extensive evaluation that covers all major open-source models with Azerbaijani support.