BI-Bench : A Comprehensive Benchmark Dataset and Unsupervised Evaluation for BI Systems

Ankush Gupta, Aniya Aggarwal, Shivangi Bithel, Arvind Agarwal

Abstract
A comprehensive benchmark is crucial for evaluating automated Business Intelligence (BI) systems and their real-world effectiveness. We propose BI-Bench, a holistic, end-to-end benchmarking framework that assesses BI systems based on the quality, relevance, and depth of insights. It categorizes queries into descriptive, diagnostic, predictive, and prescriptive types, aligning with practical BI needs. Our fully automated approach enables custom benchmark generation tailored to specific datasets. Additionally, we introduce an automated evaluation mechanism within BI-Bench that removes reliance on strict ground truth, ensuring scalable and adaptable assessments. By addressing key limitations, it offers a flexible, robust, and user-centered methodology for advancing next-generation BI systems.
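The abstract's query taxonomy (descriptive, diagnostic, predictive, prescriptive) and its ground-truth-free evaluation can be pictured with a minimal data-model sketch. The class names, fields, and scoring criteria below are illustrative assumptions, not the schema or rubric used in the paper.

```python
# Illustrative sketch only: the enum values mirror the query taxonomy named in
# the abstract; all class names, fields, and criteria are assumptions rather
# than BI-Bench's actual schema or evaluation mechanism.
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class QueryType(Enum):
    DESCRIPTIVE = "descriptive"    # what happened?
    DIAGNOSTIC = "diagnostic"      # why did it happen?
    PREDICTIVE = "predictive"      # what is likely to happen?
    PRESCRIPTIVE = "prescriptive"  # what should be done?


@dataclass
class BenchmarkQuery:
    """One benchmark item generated for a specific tabular dataset."""
    dataset_id: str
    question: str
    query_type: QueryType


@dataclass
class InsightAssessment:
    """Reference-free scores for a system-produced insight (no strict ground truth)."""
    scores: Dict[str, float] = field(default_factory=dict)  # e.g. quality/relevance/depth in [0, 1]

    def overall(self) -> float:
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0


def assess_insight(query: BenchmarkQuery, insight: str) -> InsightAssessment:
    """Placeholder scorer: a real pipeline might prompt an LLM judge per criterion."""
    criteria: List[str] = ["quality", "relevance", "depth"]
    # Dummy heuristic so the sketch runs end to end; swap in an actual judge.
    base = min(len(insight) / 500.0, 1.0)
    return InsightAssessment(scores={c: base for c in criteria})


if __name__ == "__main__":
    q = BenchmarkQuery(
        dataset_id="sales_2024",
        question="Why did Q3 revenue drop in the EMEA region?",
        query_type=QueryType.DIAGNOSTIC,
    )
    report = assess_insight(q, "Revenue fell 12% due to fewer enterprise renewals ...")
    print(q.query_type.value, round(report.overall(), 3))
```

The point of the sketch is simply that each generated query carries a type tag from the four-way taxonomy, and each system output is scored against criteria rather than matched to a single reference answer.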
Anthology ID:
2025.acl-industry.90
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Georg Rehm, Yunyao Li
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1287–1299
URL:
https://preview.aclanthology.org/landing_page/2025.acl-industry.90/
Cite (ACL):
Ankush Gupta, Aniya Aggarwal, Shivangi Bithel, and Arvind Agarwal. 2025. BI-Bench : A Comprehensive Benchmark Dataset and Unsupervised Evaluation for BI Systems. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track), pages 1287–1299, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
BI-Bench : A Comprehensive Benchmark Dataset and Unsupervised Evaluation for BI Systems (Gupta et al., ACL 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.acl-industry.90.pdf