Large Language Models for Predictive Analysis: How Far Are They?

Qin Chen, Yuanyi Ren, Xiaojun Ma, Yuyang Shi


Abstract
Predictive analysis is a cornerstone of modern decision-making, with applications across diverse domains. Large Language Models (LLMs) have emerged as powerful tools for nuanced, knowledge-intensive conversations, and thus for aiding complex decision-making tasks. With growing expectations of harnessing LLMs for predictive analysis, there is an urgent need to systematically assess their capability in this domain. However, existing studies offer no such evaluation. To bridge this gap, we introduce the PredictiQ benchmark, which comprises 1,130 sophisticated predictive analysis queries drawn from 44 real-world datasets spanning 8 fields. We design an evaluation protocol covering text analysis, code generation, and their alignment. We evaluate twelve renowned LLMs, offering insights into their practical use in predictive analysis.
Anthology ID:
2025.findings-acl.416
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7961–7978
URL:
https://preview.aclanthology.org/landing_page/2025.findings-acl.416/
Cite (ACL):
Qin Chen, Yuanyi Ren, Xiaojun Ma, and Yuyang Shi. 2025. Large Language Models for Predictive Analysis: How Far Are They?. In Findings of the Association for Computational Linguistics: ACL 2025, pages 7961–7978, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Large Language Models for Predictive Analysis: How Far Are They? (Chen et al., Findings 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.findings-acl.416.pdf