How Far Can LLMs Improve from Experience? Measuring Test-Time Learning Ability in LLMs with Human Comparison

Jiayin Wang, Zhiqiang Guo, Weizhi Ma, Min Zhang


Abstract
As evaluation designs of large language models may shape our trajectory toward artificial general intelligence, comprehensive and forward-looking assessment is essential. Existing benchmarks primarily assess static knowledge, while intelligence also entails the ability to rapidly learn from experience. To this end, we advocate for the evaluation of Test-Time Learning, the capacity to improve performance on experience-based, reasoning-intensive tasks during test time. In this work, we propose semantic games as effective testbeds for evaluating test-time learning, due to their resistance to saturation and inherent demand for strategic reasoning. We introduce an objective evaluation framework that compares model performance under both limited and cumulative experience settings, and incorporates four forms of experience representation. To provide a comparative baseline, we recruit eight human participants to complete the same task. Results show that LLMs exhibit measurable test-time learning capabilities; however, their improvements are less stable under cumulative experience and progress more slowly than those observed in humans. These findings underscore the potential of LLMs as general-purpose learning machines, while also revealing a substantial intellectual gap between models and humans, irrespective of how well LLMs perform on static benchmarks. The code and data are available.
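To make the contrast between the two experience settings concrete, the following is a minimal, hypothetical sketch of an evaluation loop; it is not the authors' released code, and the placeholder functions `query_llm` and `play_round` are assumptions standing in for the model call and the semantic-game environment.

```python
# Hypothetical sketch (not the paper's released implementation):
# comparing limited vs. cumulative experience in a semantic-game testbed.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to the LLM under evaluation."""
    raise NotImplementedError

def play_round(model_move: str) -> tuple[float, str]:
    """Placeholder game environment: returns (score, textual feedback)."""
    raise NotImplementedError

def evaluate(rounds: int, cumulative: bool, window: int = 1) -> list[float]:
    """Play `rounds` games, feeding back either the full experience history
    (cumulative setting) or only the most recent rounds (limited setting)."""
    experience: list[str] = []
    scores: list[float] = []
    for _ in range(rounds):
        prompt = "Past experience:\n" + "\n".join(experience) + "\nYour move:"
        move = query_llm(prompt)
        score, feedback = play_round(move)
        scores.append(score)
        experience.append(feedback)
        if not cumulative:
            experience = experience[-window:]  # limited-experience setting
    return scores
```

Test-time learning would then be read off the trend of `scores` across rounds, with the same protocol administered to human participants for comparison.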
Anthology ID:
2025.emnlp-main.1304
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
25688–25702
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1304/
Cite (ACL):
Jiayin Wang, Zhiqiang Guo, Weizhi Ma, and Min Zhang. 2025. How Far Can LLMs Improve from Experience? Measuring Test-Time Learning Ability in LLMs with Human Comparison. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 25688–25702, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
How Far Can LLMs Improve from Experience? Measuring Test-Time Learning Ability in LLMs with Human Comparison (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1304.pdf
Checklist:
 2025.emnlp-main.1304.checklist.pdf