MojoBench: Language Modeling and Benchmarks for Mojo

Nishat Raihan, Joanna C. S. Santos, Marcos Zampieri


Abstract
The recently introduced Mojo programming language (PL) by Modular has received significant attention in the scientific community due to its claimed substantial speed boost over Python. Despite advancements in code Large Language Models (LLMs) across various PLs, Mojo remains unexplored in this context. To address this gap, we introduce MojoBench, the first framework for Mojo code generation. MojoBench includes HumanEval-Mojo, a benchmark dataset designed for evaluating code LLMs on Mojo, and Mojo-Coder, the first LLM pretrained and finetuned for Mojo code generation, which supports instructions in 5 natural languages (NLs). Our results show that Mojo-Coder achieves a 30-35% performance improvement over leading models like GPT-4o and Claude-3.5-Sonnet. Furthermore, we provide insights into LLM behavior with underrepresented and unseen PLs, offering potential strategies for enhancing model adaptability. MojoBench contributes to our understanding of LLM capabilities and limitations in emerging programming paradigms, fostering more robust code generation systems.
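For readers unfamiliar with the setup, HumanEval-style benchmarks typically pair a function signature and docstring prompt with a reference solution that must pass hidden tests. The indented sketch below is a hypothetical task of that kind written in Mojo syntax; it is illustrative only and is not taken from the HumanEval-Mojo dataset described in the paper.

    fn fibonacci(n: Int) -> Int:
        """Return the n-th Fibonacci number (hypothetical HumanEval-style prompt)."""
        # Reference solution a model would be expected to complete:
        if n < 2:
            return n
        return fibonacci(n - 1) + fibonacci(n - 2)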
Anthology ID:
2025.findings-naacl.230
Volume:
Findings of the Association for Computational Linguistics: NAACL 2025
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4109–4128
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.230/
Cite (ACL):
Nishat Raihan, Joanna C. S. Santos, and Marcos Zampieri. 2025. MojoBench: Language Modeling and Benchmarks for Mojo. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 4109–4128, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
MojoBench: Language Modeling and Benchmarks for Mojo (Raihan et al., Findings 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.230.pdf