Yuli Yang
2025
Evaluating Robustness of LLMs to Numerical Variations in Mathematical Reasoning
Yuli Yang | Hiroaki Yamada | Takenobu Tokunaga
The Sixth Workshop on Insights from Negative Results in NLP
Evaluating an LLM’s robustness to numerical perturbation is an effective way to determine whether the LLM actually performs reasoning or merely replicates learned patterns. We propose a novel method for augmenting math word problems (MWPs) that uses templates to produce numerical variations at large scale. We also propose an automated error classification framework for scalable error analysis, distinguishing calculation errors from reasoning errors. Our experiments with these methods show that LLMs are vulnerable to numerical variations, suggesting they are not fully capable of generating valid reasoning steps and often fail at arithmetic operations.
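The template-based augmentation described above can be illustrated with a minimal sketch. The template text, variable ranges, and function names below are hypothetical examples for exposition, not the paper's actual templates or implementation:

```python
import random

# Hypothetical MWP template; placeholders {a} and {b} are numerical slots.
# The template and value ranges here are illustrative assumptions.
TEMPLATE = "Alice has {a} apples. She buys {b} more. How many apples does she have now?"

def make_variants(n, seed=0):
    """Generate n numerical variations of the template with gold answers."""
    rng = random.Random(seed)  # fixed seed for reproducible perturbations
    variants = []
    for _ in range(n):
        a = rng.randint(2, 99)
        b = rng.randint(2, 99)
        variants.append({
            "question": TEMPLATE.format(a=a, b=b),
            "answer": a + b,  # gold answer follows from the template's structure
        })
    return variants

for v in make_variants(3):
    print(v["question"], "->", v["answer"])
```

Because only the numbers change while the reasoning structure stays fixed, a drop in accuracy across such variants points to brittleness in arithmetic or reasoning rather than unfamiliar problem types.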