Kangli Xu
2025
TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios
Xiaokang Zhang | Sijia Luo | Bohan Zhang | Zeyao Ma | Jing Zhang | Yang Li | Guanlin Li | Zijun Yao | Kangli Xu | Jinchang Zhou | Daniel Zhang-Li | Jifan Yu | Shu Zhao | Juanzi Li | Jie Tang
Findings of the Association for Computational Linguistics: ACL 2025
We introduce TableLLM, a robust large language model (LLM) with 8 billion parameters, purpose-built for handling tabular data manipulation tasks in real-world office scenarios, whether the tables are embedded in documents or spreadsheets. We propose a distant supervision method for training that comprises a reasoning-process extension strategy, which helps the LLM learn reasoning patterns more effectively, and a cross-way validation strategy, which ensures the quality of the automatically generated data. To evaluate TableLLM, we craft benchmarks tailored to both the document and spreadsheet formats and build a well-organized evaluation pipeline that handles both scenarios. Thorough evaluations underscore the advantages of TableLLM over various existing general-purpose and tabular-data-focused LLMs. We have publicly released the model checkpoint, source code, benchmarks, and a web application for user interaction in an anonymized repository.