P-MMEval: A Parallel Multilingual Multitask Benchmark for Consistent Evaluation of LLMs

Yidan Zhang, Yu Wan, Boyi Deng, Baosong Yang, Hao-Ran Wei, Fei Huang, Bowen Yu, Dayiheng Liu, Junyang Lin, Fei Huang, Jingren Zhou


Abstract
Recent advancements in large language models (LLMs) showcase varied multilingual capabilities across tasks such as translation, code generation, and reasoning. Previous assessments have often been limited to fundamental natural language processing (NLP) tasks or isolated capability-specific tasks. To address this limitation, we present a comprehensive multilingual multitask benchmark. First, we introduce P-MMEval, a large-scale benchmark covering both fundamental and capability-specialized datasets. Furthermore, P-MMEval delivers consistent language coverage across its datasets and provides parallel samples. Finally, we conduct extensive experiments on representative multilingual model series to compare performance across models and tasks, explore how multilingual performance relates to factors such as tasks, model sizes, languages, and prompts, and examine the effectiveness of knowledge transfer from English to other languages. The resulting insights are intended to offer valuable guidance for future research.
Anthology ID:
2025.emnlp-main.242
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
4809–4836
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.242/
Cite (ACL):
Yidan Zhang, Yu Wan, Boyi Deng, Baosong Yang, Hao-Ran Wei, Fei Huang, Bowen Yu, Dayiheng Liu, Junyang Lin, Fei Huang, and Jingren Zhou. 2025. P-MMEval: A Parallel Multilingual Multitask Benchmark for Consistent Evaluation of LLMs. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 4809–4836, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
P-MMEval: A Parallel Multilingual Multitask Benchmark for Consistent Evaluation of LLMs (Zhang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.242.pdf
Checklist:
2025.emnlp-main.242.checklist.pdf