Table-LLM-Specialist: Language Model Specialists for Tables using Iterative Fine-tuning
Junjie Xing, Yeye He, Mengyu Zhou, Haoyu Dong, Shi Han, Dongmei Zhang, Surajit Chaudhuri
Abstract
Language models such as GPT and Llama have shown remarkable ability on diverse natural language tasks, yet their performance on complex table tasks (e.g., NL-to-Code, data cleaning, etc.) remains suboptimal. Improving their performance often requires task-specific fine-tuning, which, however, requires expensive human labeling and is prone to over-fitting.

In this work, we propose Table-Specialist, a new self-trained fine-tuning paradigm specifically designed for table tasks. Our insight is that for each table task, there often exist two dual versions of the same task, one generative and one classification in nature. Leveraging this duality, we propose a Generator-Validator paradigm to iteratively generate-then-validate training data from language models, to fine-tune stronger Table-Specialist models that specialize in a given task, without using manually labeled data.

Extensive evaluations of Table-Specialist on Llama, GPT-3.5, and GPT-4 suggest that Table-Specialist has (1) **strong performance** on diverse table tasks over vanilla language models: for example, Table-Specialist fine-tuned on GPT-3.5 not only outperforms vanilla GPT-3.5, but can often match or surpass GPT-4-level quality; (2) **lower cost** to deploy, because when a Table-Specialist model fine-tuned on GPT-3.5 achieves GPT-4-level quality, it becomes possible to deploy smaller models with lower latency and cost at comparable quality; and (3) **better generalizability** when evaluated across multiple benchmarks, since Table-Specialist is fine-tuned on a broad range of training data systematically generated from diverse real tables.

Our code is available at [microsoft/Table-Specialist](https://github.com/microsoft/Table-Specialist). Specialist models fine-tuned using Table-Specialist have been integrated into Microsoft Excel for use cases such as automated table data cleaning.
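To make the abstract's Generator-Validator paradigm concrete, below is a minimal sketch of what such an iterative generate-then-validate self-training loop might look like. This is an illustration under stated assumptions, not the paper's implementation: `generate_candidates`, `validate`, and `fine_tune` are hypothetical placeholders standing in for the generative dual task, its classification dual, and a fine-tuning step, respectively.

```python
# Hypothetical sketch of a Generator-Validator self-training loop.
# The generative task (e.g., NL-to-Code) produces candidate examples, and
# its classification dual (e.g., "does this code answer the NL request?")
# filters them, so training pairs are harvested without human labels.

from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (task input, model output)

def iterative_self_training(
    tables: List[str],
    generate_candidates: Callable[[str], List[Example]],  # generative dual
    validate: Callable[[Example], bool],                  # classification dual
    fine_tune: Callable[[List[Example]], None],
    rounds: int = 3,
) -> None:
    """Iteratively generate, validate, and fine-tune on self-labeled data."""
    for _ in range(rounds):
        accepted: List[Example] = []
        for table in tables:
            for ex in generate_candidates(table):
                # Keep only candidates the validator accepts, filtering out
                # low-quality self-generated labels.
                if validate(ex):
                    accepted.append(ex)
        # Fine-tune on the validated data; the resulting stronger model
        # serves as generator and validator in the next round.
        fine_tune(accepted)
```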
- Anthology ID:
- 2025.emnlp-main.1795
- Volume:
- Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
- Month:
- November
- Year:
- 2025
- Address:
- Suzhou, China
- Editors:
- Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 35431–35448
- URL:
- https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1795/
- Cite (ACL):
- Junjie Xing, Yeye He, Mengyu Zhou, Haoyu Dong, Shi Han, Dongmei Zhang, and Surajit Chaudhuri. 2025. Table-LLM-Specialist: Language Model Specialists for Tables using Iterative Fine-tuning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 35431–35448, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- Table-LLM-Specialist: Language Model Specialists for Tables using Iterative Fine-tuning (Xing et al., EMNLP 2025)
- PDF:
- https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1795.pdf