CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages

Yilun Yang, Yekun Chai


Abstract
Code-mixing, the practice of switching between languages within a conversation, poses unique challenges for traditional NLP. Existing benchmarks like LinCE and GLUECoS are limited by their narrow language pairs and tasks, failing to adequately assess large language models' (LLMs) code-mixing abilities. Despite the recognized importance of code-mixing for multilingual users, research on LLMs in this context remains sparse. Additionally, current techniques for synthesizing code-mixed data remain underdeveloped. In response, we introduce CodeMixBench, a comprehensive benchmark covering eight tasks, including three specific to LLMs and five traditional NLP tasks, and 18 languages from seven language families. We also propose a new method for generating large-scale synthetic code-mixed texts by combining word substitution with GPT-4 prompting. Our evaluation reveals consistent underperformance of LLMs on code-mixed datasets involving different language families. Enhancements in training data size, model scale, and few-shot learning could improve their performance. The code and dataset are available at https://github.com/Jeromeyluck/CodeMixBench.
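To make the data-synthesis recipe in the abstract concrete, the sketch below shows one plausible shape of a "word substitution + GPT-4 prompting" pipeline. This is a minimal illustration, not the authors' released code: the toy English-Hindi lexicon, the substitution rate, and the prompt wording are all assumptions, while the OpenAI chat-completions call itself is the standard API.

```python
import random
from openai import OpenAI

# Hypothetical toy bilingual lexicon (English -> Hindi); the paper's actual
# lexicons and substitution policy are not reproduced here.
EN_HI_LEXICON = {
    "food": "खाना",
    "really": "सच में",
    "good": "अच्छा",
}

def word_substitute(sentence: str, lexicon: dict, rate: float = 0.5) -> str:
    """Replace a random subset of source words with dictionary translations."""
    out = []
    for tok in sentence.split():
        key = tok.lower().strip(".,!?")
        if key in lexicon and random.random() < rate:
            # Preserve any trailing punctuation when substituting the word.
            trail = tok[len(tok.rstrip(".,!?")):]
            out.append(lexicon[key] + trail)
        else:
            out.append(tok)
    return " ".join(out)

def gpt4_polish(mixed: str, client: OpenAI) -> str:
    """Ask GPT-4 to smooth the substituted draft into natural code-mixed text.
    The prompt wording is illustrative, not the paper's actual prompt."""
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite the input as fluent Hindi-English code-mixed "
                        "text, keeping the meaning unchanged."},
            {"role": "user", "content": mixed},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    draft = word_substitute("The food was really good!", EN_HI_LEXICON)
    print(draft)  # e.g. "The खाना was really अच्छा!"
    # client = OpenAI(); print(gpt4_polish(draft, client))  # needs an API key
```

The two-stage design matters: dictionary substitution alone yields stilted, often ungrammatical mixes, so the LLM pass is used to restore fluency while keeping the intended language mixture.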
Anthology ID:
2025.emnlp-main.109
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2139–2169
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.109/
Cite (ACL):
Yilun Yang and Yekun Chai. 2025. CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 2139–2169, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
CodeMixBench: Evaluating Code-Mixing Capabilities of LLMs Across 18 Languages (Yang & Chai, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.109.pdf
Checklist:
2025.emnlp-main.109.checklist.pdf