Metric Calculating Benchmark: Code-Verifiable Complicate Instruction Following Benchmark for Large Language Models

Hyeonseok Moon, Seongtae Hong, Jaehyung Seo, Heuiseok Lim


Abstract
Recent frontier-level LLMs have saturated many previously difficult benchmarks, leaving little room for further differentiation. This progress highlights the need for challenging benchmarks that provide objective verification. In this paper, we introduce MCBench, a benchmark designed to evaluate whether LLMs can execute string-matching NLP metrics by strictly following step-by-step instructions. Unlike prior benchmarks that depend on subjective judgments or general reasoning, MCBench offers an objective, deterministic, and code-verifiable evaluation. This setup allows us to systematically test whether LLMs can maintain accurate step-by-step execution, including instruction adherence, numerical computation, and long-range consistency in handling intermediate results. To ensure objective evaluation of these abilities, we provide parallel reference code that can assess the accuracy of LLM outputs. We also provide three evaluative metrics and three benchmark variants designed to measure the detailed instruction-understanding capability of LLMs. Our analyses show that MCBench serves as an effective and objective tool for evaluating the capabilities of cutting-edge LLMs.
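The sketch below is an illustrative example only, not the authors' released reference code: it shows the general idea of code-verifiable metric evaluation described in the abstract, using token-level unigram precision as an assumed stand-in for a string-matching metric. The function names and the tolerance are hypothetical.

```python
from collections import Counter


def unigram_precision(candidate: str, reference: str) -> float:
    """Deterministic reference metric: fraction of candidate tokens matched in the reference (clipped counts)."""
    cand_tokens = candidate.split()
    if not cand_tokens:
        return 0.0
    ref_counts = Counter(reference.split())
    matched = sum(min(cnt, ref_counts.get(tok, 0))
                  for tok, cnt in Counter(cand_tokens).items())
    return matched / len(cand_tokens)


def verify_llm_output(llm_reported_score: float, candidate: str, reference: str,
                      tol: float = 1e-4) -> bool:
    """Check whether the score the LLM computed step by step matches the code-computed value."""
    gold = unigram_precision(candidate, reference)
    return abs(llm_reported_score - gold) <= tol


if __name__ == "__main__":
    cand = "the cat sat on the mat"
    ref = "the cat is on the mat"
    llm_answer = 0.8333  # value a (hypothetical) LLM reports after following the instructions
    print(verify_llm_output(llm_answer, cand, ref))  # True: 5/6 tokens match within tolerance
```

In this setup, the metric value is fully determined by the inputs, so an LLM's step-by-step calculation can be verified automatically against the reference implementation rather than judged subjectively.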
Anthology ID:
2025.emnlp-main.1051
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20820–20834
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1051/
Cite (ACL):
Hyeonseok Moon, Seongtae Hong, Jaehyung Seo, and Heuiseok Lim. 2025. Metric Calculating Benchmark: Code-Verifiable Complicate Instruction Following Benchmark for Large Language Models. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 20820–20834, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Metric Calculating Benchmark: Code-Verifiable Complicate Instruction Following Benchmark for Large Language Models (Moon et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1051.pdf
Checklist:
 2025.emnlp-main.1051.checklist.pdf