NumPert: Numerical Perturbations to Probe Language Models for Veracity Prediction

Peter Røysland Aarnes, Vinay Setty


Abstract
Large language models show strong performance on knowledge-intensive tasks such as fact-checking and question answering, yet they often struggle with numerical reasoning. We present a systematic evaluation of state-of-the-art models for veracity prediction on numerical claim-evidence pairs, using controlled perturbations, including label-flipping probes, to test robustness. Our results indicate that even leading proprietary systems suffer accuracy drops of up to 62% under certain perturbations. No model proves robust across all conditions. We further find that increasing context length generally reduces accuracy, but when the extended context is enriched with perturbed demonstrations, most models recover substantially. These findings highlight critical limitations in numerical fact-checking and suggest that robustness remains an open challenge for current language models.
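To make the abstract's setup concrete, the sketch below shows one way a numerical perturbation with a flipped gold label could be constructed for a claim-evidence pair. This is a minimal illustration of the general idea, not the paper's implementation; the function names, the SUPPORTED/REFUTED label scheme, and the choice of scaling the first number in the evidence are all assumptions made for this example.

```python
import re

# Illustrative label scheme (assumed, not from the paper).
LABEL_FLIP = {"SUPPORTED": "REFUTED", "REFUTED": "SUPPORTED"}

def perturb_number(evidence: str, scale: float = 2.0) -> str:
    """Multiply the first number found in the evidence by `scale`."""
    match = re.search(r"\d+(?:\.\d+)?", evidence)
    if match is None:
        return evidence  # nothing to perturb
    value = float(match.group()) * scale
    new_text = str(int(value)) if value.is_integer() else f"{value:g}"
    return evidence[:match.start()] + new_text + evidence[match.end():]

def make_probe(claim: str, evidence: str, label: str) -> dict:
    """Build a perturbed claim-evidence pair with a flipped gold label."""
    return {
        "claim": claim,
        "evidence": perturb_number(evidence),
        "label": LABEL_FLIP.get(label, label),
    }

if __name__ == "__main__":
    probe = make_probe(
        claim="The company reported 1.2 million users in 2023.",
        evidence="Its annual report states it had 1.2 million users in 2023.",
        label="SUPPORTED",
    )
    print(probe)  # evidence now says 2.4 million; gold label becomes REFUTED
```

Under this setup, a robust model should change its prediction when the evidence no longer supports the claim; the accuracy drops reported in the abstract measure how often models fail to do so.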
Anthology ID:
2025.ijcnlp-srw.8
Volume:
The 14th International Joint Conference on Natural Language Processing and The 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Month:
December
Year:
2025
Address:
Mumbai, India
Editors:
Santosh T.y.s.s, Shuichiro Shimizu, Yifan Gong
Venue:
IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
78–95
URL:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-srw.8/
Cite (ACL):
Peter Røysland Aarnes and Vinay Setty. 2025. NumPert: Numerical Perturbations to Probe Language Models for Veracity Prediction. In The 14th International Joint Conference on Natural Language Processing and The 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, pages 78–95, Mumbai, India. Association for Computational Linguistics.
Cite (Informal):
NumPert: Numerical Perturbations to Probe Language Models for Veracity Prediction (Aarnes & Setty, IJCNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-ijcnlp-aacl/2025.ijcnlp-srw.8.pdf