Pitfalls of Scale: Investigating the Inverse Task of Redefinition in Large Language Models

Elena Stringli, Maria Lymperaiou, Giorgos Filandrianos, Athanasios Voulodimos, Giorgos Stamou


Abstract
Inverse tasks can uncover potential reasoning gaps as Large Language Models (LLMs) scale up. In this work, we explore the redefinition task, in which we assign alternative values to well-known physical constants and units of measure, prompting LLMs to respond accordingly. Our findings show that not only does model performance degrade with scale, but its false confidence also rises. Moreover, while factors such as prompting strategies or response formatting are influential, they do not preclude LLMs from anchoring to memorized values.
Anthology ID: 2025.findings-acl.492
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 9445–9469
URL: https://preview.aclanthology.org/landing_page/2025.findings-acl.492/
Cite (ACL): Elena Stringli, Maria Lymperaiou, Giorgos Filandrianos, Athanasios Voulodimos, and Giorgos Stamou. 2025. Pitfalls of Scale: Investigating the Inverse Task of Redefinition in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 9445–9469, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Pitfalls of Scale: Investigating the Inverse Task of Redefinition in Large Language Models (Stringli et al., Findings 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.findings-acl.492.pdf