Askhat Sametov


2025

Qorǵau: Evaluating Safety in Kazakh-Russian Bilingual Contexts
Maiya Goloburda | Nurkhan Laiyk | Diana Turmakhan | Yuxia Wang | Mukhammed Togmanov | Jonibek Mansurov | Askhat Sametov | Nurdaulet Mukhituly | Minghan Wang | Daniil Orel | Zain Muhammad Mujahid | Fajri Koto | Timothy Baldwin | Preslav Nakov
Findings of the Association for Computational Linguistics: ACL 2025

Large language models (LLMs) are known to have the potential to generate harmful content, posing risks to users. While significant progress has been made in developing taxonomies for LLM risks and safety evaluation prompts, most studies have focused on monolingual contexts, primarily in English. However, language- and region-specific risks in bilingual contexts are often overlooked, and core findings can diverge from those in monolingual settings. In this paper, we introduce Qorǵau, a novel dataset specifically designed for safety evaluation in Kazakh and Russian, reflecting the unique bilingual context in Kazakhstan, where both Kazakh (a low-resource language) and Russian (a high-resource language) are spoken. Experiments with both multilingual and language-specific LLMs reveal notable differences in safety performance, emphasizing the need for tailored, region-specific datasets to ensure the responsible and safe deployment of LLMs in countries like Kazakhstan. Warning: this paper contains example data that may be offensive, harmful, or biased.