TableWise at SemEval-2025 Task 8: LLM Agents for TabQA
Harsh Bansal, Aman Raj, Akshit Sharma, Parameswari Krishnamurthy
Abstract
Tabular Question Answering (TabQA) is a challenging task that requires models to comprehend structured tabular data and generate accurate responses through complex reasoning. In this paper, we present our approach to SemEval-2025 Task 8: Tabular Question Answering, for which we develop a large language model (LLM)-based agent capable of understanding and reasoning over tabular inputs. Our agent leverages a hybrid retrieval and generation strategy, incorporating structured table parsing, semantic understanding, and reasoning mechanisms to enhance response accuracy. We fine-tune a pre-trained LLM on domain-specific tabular data, integrating chain-of-thought prompting and adaptive decoding to improve multi-hop reasoning over tables. Experimental results demonstrate that our approach achieves competitive performance, effectively handling numerical operations, entity linking, and logical inference. Our findings suggest that LLM-based agents, when properly adapted, can significantly advance the state of the art in tabular question answering.
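To make the prompting strategy concrete, here is a minimal sketch of chain-of-thought prompting over a serialized table. This is an illustrative reconstruction, not the authors' released code: the helper names (`serialize_table`, `build_cot_prompt`, `query_llm`), the Markdown serialization, the prompt wording, and the example data are all assumptions for demonstration.

```python
# Minimal sketch of chain-of-thought prompting for TabQA.
# NOTE: illustrative reconstruction, not the paper's actual code;
# `query_llm` is a hypothetical stand-in for any LLM completion API.

def serialize_table(header, rows):
    """Render a table as Markdown so the LLM sees explicit row/column structure."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(c) for c in row) + " |" for row in rows]
    return "\n".join(lines)

def build_cot_prompt(header, rows, question):
    """Wrap the table and question in a step-by-step reasoning instruction."""
    return (
        "You are given the following table:\n\n"
        f"{serialize_table(header, rows)}\n\n"
        f"Question: {question}\n"
        "Reason step by step over the relevant rows and columns, "
        "then give the final answer on a line starting with 'Answer:'."
    )

# Example usage (hypothetical data):
prompt = build_cot_prompt(
    header=["city", "population"],
    rows=[["Vienna", 2000000], ["Graz", 290000]],
    question="Which city has the larger population?",
)
# answer = query_llm(prompt)  # plug in any chat-completion call here
print(prompt)
```

Serializing the table explicitly, rather than pasting raw CSV, is one common way to expose structure to the model; the abstract's "structured table parsing" plausibly refers to a richer variant of this step.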
- Anthology ID: 2025.semeval-1.87
- Volume: Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
- Month: July
- Year: 2025
- Address: Vienna, Austria
- Editors: Sara Rosenthal, Aiala Rosá, Debanjan Ghosh, Marcos Zampieri
- Venues: SemEval | WS
- Publisher: Association for Computational Linguistics
- Pages: 623–626
- URL: https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.87/
- Cite (ACL): Harsh Bansal, Aman Raj, Akshit Sharma, and Parameswari Krishnamurthy. 2025. TableWise at SemEval-2025 Task 8: LLM Agents for TabQA. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), pages 623–626, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal): TableWise at SemEval-2025 Task 8: LLM Agents for TabQA (Bansal et al., SemEval 2025)
- PDF: https://preview.aclanthology.org/corrections-2025-08/2025.semeval-1.87.pdf