LongTableBench: Benchmarking Long-Context Table Reasoning across Real-World Formats and Domains
Liyao Li | Jiaming Tian | Hao Chen | Wentao Ye | Chao Ye | Haobo Wang | Ningtao Wang | Xing Fu | Gang Chen | Junbo Zhao
Findings of the Association for Computational Linguistics: EMNLP 2025
We introduce **LongTableBench**, a benchmark for evaluating long-context reasoning over semi-structured tables across diverse formats, tasks, and domains. It comprises 5,950 QA instances spanning 7 table formats (e.g., Markdown, HTML, SQL), 18 domains, and inputs up to 128K tokens, including multi-turn and multi-table settings. To ensure data quality, we combine symbolic supervision, cross-model validation, and human review. Evaluating 52 LLMs, including general-purpose, table-specific, and reasoning-enhanced models, reveals that only the strongest models maintain robust performance as context length and format diversity increase. We further show that end-to-end models outperform compression-based approaches, especially on tasks requiring semantic integration. LongTableBench provides a rigorous, scalable testbed for advancing long-context tabular understanding and highlights key limitations in current LLMs' structural and reasoning capabilities.