mSCoRe: A Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning

Nghia Trung Ngo, Franck Dernoncourt, Thien Huu Nguyen


Abstract
Recent advancements in reasoning-reinforced Large Language Models (LLMs) have shown remarkable capabilities in complex reasoning tasks. However, the mechanisms underlying their utilization of different human reasoning skills remain poorly investigated, especially for multilingual commonsense reasoning that involves everyday knowledge across different languages and cultures. To address this gap, we propose a Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning (mSCoRe). Our benchmark incorporates three key components designed to systematically evaluate LLMs’ reasoning capabilities: (1) a novel taxonomy of reasoning skills that enables fine-grained analysis of models’ reasoning processes, (2) a robust data synthesis pipeline tailored specifically for commonsense reasoning evaluation, and (3) a complexity scaling framework that allows task difficulty to scale dynamically alongside future improvements in LLM abilities. Extensive experiments on eight state-of-the-art LLMs of varying sizes and training approaches demonstrate that mSCoRe remains significantly challenging for current models, particularly at higher complexity levels. Our results reveal the limitations of such reasoning-reinforced models when confronted with nuanced multilingual general and cultural commonsense. We further provide a detailed analysis of the models’ reasoning processes, suggesting future directions for improving multilingual commonsense reasoning capabilities.
Anthology ID:
2026.lrec-main.399
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resource Association
Note:
Pages:
5095–5115
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.399/
Cite (ACL):
Nghia Trung Ngo, Franck Dernoncourt, and Thien Huu Nguyen. 2026. mSCoRe: A Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 5095–5115, Palma de Mallorca, Spain. ELRA Language Resource Association.
Cite (Informal):
mSCoRe: A Multilingual and Scalable Benchmark for Skill-based Commonsense Reasoning (Ngo et al., LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.399.pdf