RoMath: A Mathematical Reasoning Benchmark in Romanian

Adrian Cosma, Ana-Maria Bucur, Emilian Radoi


Abstract
Mathematics has long been conveyed through natural language, primarily for human understanding. With the rise of mechanized mathematics and proof assistants, there is a growing need to understand informal mathematical text, yet most existing benchmarks focus solely on English, overlooking other languages. This paper introduces RoMath, a Romanian mathematical reasoning benchmark suite comprising three subsets — Baccalaureate, Competitions, and Synthetic — which cover a range of mathematical domains and difficulty levels, aiming to improve non-English language models and promote multilingual AI development. By focusing on Romanian, a low-resource language with unique linguistic features, RoMath addresses the limitations of Anglo-centric models and emphasizes the need for dedicated resources beyond simple automatic translation. We benchmark several open-weight language models, highlighting the importance of creating resources for underrepresented languages. The code and datasets are available for research purposes.
Anthology ID:
2025.mathnlp-main.7
Volume:
Proceedings of The 3rd Workshop on Mathematical Natural Language Processing (MathNLP 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Marco Valentino, Deborah Ferreira, Mokanarangan Thayaparan, Leonardo Ranaldi, Andre Freitas
Venues:
MathNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
95–111
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.mathnlp-main.7/
Cite (ACL):
Adrian Cosma, Ana-Maria Bucur, and Emilian Radoi. 2025. RoMath: A Mathematical Reasoning Benchmark in Romanian. In Proceedings of The 3rd Workshop on Mathematical Natural Language Processing (MathNLP 2025), pages 95–111, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
RoMath: A Mathematical Reasoning Benchmark in Romanian (Cosma et al., MathNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.mathnlp-main.7.pdf