The Lookahead Limitation: Why Multi-Operand Addition is Hard for LLMs

Tanja Baeumel, Josef Van Genabith, Simon Ostermann


Abstract
Autoregressive large language models (LLMs) exhibit impressive performance across various tasks but struggle with simple arithmetic, such as the addition of two or more operands. We show that this struggle arises from LLMs’ reliance on a simple one-digit lookahead heuristic, which imposes an upper bound on LLM performance and accounts both for characteristic error patterns in two-operand addition and for failure in multi-operand addition, where the carry-over logic is more complex. Our probing experiments and digit-wise accuracy evaluation show that the evaluated LLMs fail precisely where a one-digit lookahead is insufficient to account for cascading carries. We analyze the impact of tokenization strategies on arithmetic performance and show that all investigated models, regardless of tokenization and size, are inherently limited in the addition of multiple operands due to their reliance on a one-digit lookahead heuristic. Our findings reveal limitations that prevent LLMs from generalizing to more complex numerical reasoning.
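To make the heuristic concrete, the following is a minimal Python sketch (not the authors' probing code; lookahead_add and its parameters are illustrative assumptions) that predicts each digit of a sum most-significant-column first while estimating incoming carries from only a fixed number of columns to its right. With a one-column window it reproduces the failure mode described in the abstract: any carry that must cascade through more than one column is missed, which is rare with two operands but common once many operands produce large per-column carries.

def lookahead_add(operands, lookahead=1):
    # Predict sum(operands) digit by digit, most-significant column first,
    # estimating each incoming carry from only `lookahead` columns to its right.
    # This is a caricature of the one-digit lookahead heuristic, not the
    # authors' implementation or a correct addition algorithm.
    width = len(str(sum(operands)))          # simplification: use true width
    padded = [str(x).zfill(width) for x in operands]
    # cols[k] = sum of the k-th least-significant digits over all operands
    cols = [sum(int(p[width - 1 - k]) for p in padded) for k in range(width)]
    out = []
    for k in range(width - 1, -1, -1):
        carry = 0
        # Simulate the carry chain only inside the lookahead window,
        # assuming no carry enters the window from further right.
        for j in range(max(0, k - lookahead), k):
            carry = (cols[j] + carry) // 10
        out.append(str((cols[k] + carry) % 10))
    return int("".join(out))

for ops in [(17, 25), (99, 1), (38, 27, 36)]:
    print(ops, "true:", sum(ops), "lookahead-1 prediction:", lookahead_add(ops))

# (17, 25)     -> true 42,  predicted 42 : correct, no cascading carry
# (99, 1)      -> true 100, predicted 0  : leading digit lost, the carry
#                                          cascades across two columns
# (38, 27, 36) -> true 101, predicted 1  : multi-operand column sums create
#                                          carries > 1 that a one-digit
#                                          lookahead cannot anticipate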
Anthology ID:
2025.blackboxnlp-1.15
Volume:
Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Yonatan Belinkov, Aaron Mueller, Najoung Kim, Hosein Mohebbi, Hanjie Chen, Dana Arad, Gabriele Sarti
Venues:
BlackboxNLP | WS
Publisher:
Association for Computational Linguistics
Pages:
250–262
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.blackboxnlp-1.15/
Cite (ACL):
Tanja Baeumel, Josef Van Genabith, and Simon Ostermann. 2025. The Lookahead Limitation: Why Multi-Operand Addition is Hard for LLMs. In Proceedings of the 8th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 250–262, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
The Lookahead Limitation: Why Multi-Operand Addition is Hard for LLMs (Baeumel et al., BlackboxNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.blackboxnlp-1.15.pdf