Yuntian Gu
2025
How Numerical Precision Affects Arithmetical Reasoning Capabilities of LLMs
Guhao Feng | Kai Yang | Yuntian Gu | Xinyue Ai | Shengjie Luo | Jiacheng Sun | Di He | Zhenguo Li | Liwei Wang
Findings of the Association for Computational Linguistics: ACL 2025
Despite the remarkable success of Transformer-based large language models (LLMs) across various domains, understanding and enhancing their mathematical capabilities remains a significant challenge. In this paper, we conduct a rigorous theoretical analysis of LLMs' mathematical abilities, with a specific focus on their arithmetic performance. We identify numerical precision as a key factor that influences their effectiveness in arithmetical tasks. Our results show that Transformers operating with low numerical precision fail to address arithmetic tasks, such as iterated addition and integer multiplication, unless the model size grows super-polynomially with respect to the input length. In contrast, Transformers with standard numerical precision can efficiently handle these tasks with significantly smaller model sizes. We further support our theoretical findings through empirical experiments that explore the impact of varying numerical precision on arithmetic tasks, providing valuable insights for improving the mathematical reasoning capabilities of LLMs.
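The precision effect the abstract describes can be seen in a toy experiment. The following minimal sketch (our illustration, not code from the paper) performs iterated addition with a float16 accumulator and compares it against float64: once the running sum outgrows the float16 mantissa, each small operand is partially rounded away, so the low-precision result drifts from the exact answer as the input length grows.

```python
# Minimal sketch (illustration only, not from the paper): iterated
# addition of n small integers in float16 vs. float64. float16 has a
# 10-bit mantissa, so once the accumulator exceeds 2048 its spacing is
# larger than 1 and small addends are rounded, compounding the error.
import numpy as np

n = 5000                                 # input length (number of operands)
rng = np.random.default_rng(0)
xs = rng.integers(1, 10, size=n)         # small integers to be summed

exact = int(xs.sum())                    # exact integer result

# Sequential accumulation in low precision: rounding error compounds.
acc16 = np.float16(0.0)
for x in xs:
    acc16 = acc16 + np.float16(x)        # each step rounds to float16

acc64 = xs.astype(np.float64).sum()      # standard precision is exact here

print(f"exact sum      : {exact}")
print(f"float64 result : {acc64:.0f}")
print(f"float16 result : {float(acc16):.0f}")  # visibly drifts from exact
```

Growing n makes the gap widen, mirroring the abstract's claim that low-precision Transformers cannot handle iterated addition as input length scales.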
2023
FLatS: Principled Out-of-Distribution Detection with Feature-Based Likelihood Ratio Score
Haowei Lin | Yuntian Gu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Detecting out-of-distribution (OOD) instances is crucial for NLP models in practical applications. Although numerous OOD detection methods exist, most of them are empirical. Backed by theoretical analysis, this paper advocates for measuring the “OOD-ness” of a test case x through the likelihood ratio between the out-distribution P_out and the in-distribution P_in. We argue that state-of-the-art (SOTA) feature-based OOD detection methods, such as Maha and KNN, are suboptimal since they only estimate the in-distribution density p_in(x). To address this issue, we propose FLatS, a principled solution for OOD detection based on the likelihood ratio. Moreover, we demonstrate that FLatS can serve as a general framework capable of enhancing other OOD detection methods by incorporating out-distribution density estimation p_out(x). Experiments show that FLatS establishes a new SOTA on popular benchmarks.
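The likelihood-ratio idea can be sketched in a few lines. The code below is our illustration, not the authors' implementation: it uses the distance to the k-th nearest neighbor as a non-parametric density proxy (a larger kNN radius means lower density), and scores a query by how much denser it looks under an out-distribution feature bank than under the in-distribution bank. The 2-D toy features and the d_in − d_out proxy are assumptions made for the example.

```python
# Minimal sketch of a feature-based likelihood-ratio OOD score
# (illustration only, not the FLatS implementation). kNN distance
# serves as a density proxy: larger k-th neighbor radius ~ lower
# density, so d_in - d_out acts like log p_out(x) - log p_in(x).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_distance(bank: np.ndarray, query: np.ndarray, k: int = 10) -> np.ndarray:
    """Distance from each query row to its k-th nearest neighbor in `bank`."""
    nn = NearestNeighbors(n_neighbors=k).fit(bank)
    dists, _ = nn.kneighbors(query)
    return dists[:, -1]

def likelihood_ratio_score(in_feats, out_feats, query, k=10):
    # Positive score: query is denser under the out-distribution bank
    # than under the in-distribution bank, i.e., it looks more OOD.
    d_in = knn_distance(in_feats, query, k)
    d_out = knn_distance(out_feats, query, k)
    return d_in - d_out

# Hypothetical 2-D "features": in-distribution near the origin,
# auxiliary out-distribution samples shifted away from it.
rng = np.random.default_rng(0)
in_feats = rng.normal(0.0, 1.0, size=(1000, 2))
out_feats = rng.normal(4.0, 1.0, size=(1000, 2))

id_query = rng.normal(0.0, 1.0, size=(5, 2))    # should score low (negative)
ood_query = rng.normal(4.0, 1.0, size=(5, 2))   # should score high (positive)
print(likelihood_ratio_score(in_feats, out_feats, id_query))
print(likelihood_ratio_score(in_feats, out_feats, ood_query))
```

Dropping the d_out term recovers the purely in-distribution scoring that the paper argues is suboptimal, which is why the ratio form can also be bolted onto existing feature-based detectors.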