Yanjie Fu
2025
Weaver: Interweaving SQL and LLM for Table Reasoning
Rohit Khoja | Devanshu Gupta | Yanjie Fu | Dan Roth | Vivek Gupta
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Querying tables with unstructured data is challenging due to the presence of text (or images), either embedded in the table or in external paragraphs, which traditional SQL struggles to process, especially for tasks requiring semantic reasoning. While Large Language Models (LLMs) excel at understanding context, they face limitations with long input sequences. Existing approaches that combine SQL and LLMs typically rely on rigid, predefined workflows, limiting their adaptability to complex queries. To address these issues, we introduce Weaver, a modular pipeline that dynamically integrates SQL and LLMs for table-based question answering (Table QA). Weaver generates a flexible, step-by-step plan that combines SQL for structured data retrieval with LLMs for semantic processing. By decomposing complex queries into manageable subtasks, Weaver improves accuracy and generalization. Our experiments show that Weaver consistently outperforms state-of-the-art methods across four Table QA datasets, reducing both API calls and error rates.
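A minimal sketch of the SQL/LLM interleaving idea described in the abstract. The plan format, the `answer_question` function, and the `run_llm` callable are illustrative assumptions, not Weaver's actual implementation; the point is only that a generated plan alternates structured SQL retrieval with LLM semantic steps.

```python
import sqlite3
from typing import Callable

def answer_question(
    db_path: str,
    plan_steps: list[dict],
    run_llm: Callable[[str], str],  # any LLM completion call
) -> str:
    """Execute a step-by-step plan mixing SQL retrieval with LLM reasoning.

    Each step is either {"type": "sql", "query": "..."} run against the
    table store, or {"type": "llm", "prompt": "..."} run over the context
    accumulated from earlier steps.
    """
    conn = sqlite3.connect(db_path)
    context = ""
    for step in plan_steps:
        if step["type"] == "sql":
            # structured retrieval: SQL handles filtering and aggregation
            rows = conn.execute(step["query"]).fetchall()
            context += f"\nSQL result: {rows}"
        else:
            # semantic step: the LLM reasons over prior SQL results
            context = run_llm(f"{step['prompt']}\nContext so far:{context}")
    conn.close()
    return context
```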
MixLLM: Dynamic Routing in Mixed Large Language Models
Xinyuan Wang | Yanchi Liu | Wei Cheng | Xujiang Zhao | Zhengzhang Chen | Wenchao Yu | Yanjie Fu | Haifeng Chen
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large Language Models (LLMs) have recently shown potential for artificial general intelligence; however, their usage is costly and incurs high response latency. Given mixed LLMs with their own strengths and weaknesses, LLM routing aims to identify the most suitable model for each query in the stream to maximize response quality and minimize cost and latency. However, the challenges involve: (1) dynamic trade-offs among quality, cost, and latency; (2) enabling continual learning in deployed systems; and (3) navigating a varying (e.g., new LLM addition or old LLM removal) set of LLM candidates over time. To bridge these gaps, we develop MixLLM, a dynamic contextual-bandit-based routing system for query-LLM assignment. Specifically, we first leverage query tags to enhance query embeddings for the routing task. Next, we design lightweight prediction models to estimate the response qualities and costs of queries over LLMs. We then devise a meta-decision maker to choose the query-LLM assignments that best trade off response quality, cost, and latency. Finally, the system benefits from continual training, allowing it to adapt to evolving queries and user feedback over time. Our extensive experiments show that MixLLM achieves the best trade-offs in response quality, cost, and latency (97.25% of GPT-4’s quality at 24.18% of the cost under the time constraint).
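A minimal sketch of contextual-bandit LLM routing in the spirit of the abstract. The `OnlineLinear` predictors, the epsilon-greedy exploration, and the quality-minus-weighted-cost scoring rule are illustrative assumptions, not the paper's exact models; they stand in for the lightweight prediction models and meta-decision maker it describes.

```python
import random
import numpy as np

class OnlineLinear:
    """Lightweight online linear predictor (SGD), one per (LLM, target)."""
    def __init__(self, dim: int, lr: float = 0.01):
        self.w = np.zeros(dim)
        self.lr = lr
    def predict(self, x: np.ndarray) -> float:
        return float(self.w @ x)
    def update(self, x: np.ndarray, y: float) -> None:
        self.w += self.lr * (y - self.predict(x)) * x

class Router:
    def __init__(self, llm_names: list[str], dim: int,
                 lam: float = 0.5, eps: float = 0.1):
        self.llms = list(llm_names)   # candidate set may grow or shrink over time
        self.quality = {m: OnlineLinear(dim) for m in self.llms}
        self.cost = {m: OnlineLinear(dim) for m in self.llms}
        self.lam = lam                # cost weight in the quality/cost trade-off
        self.eps = eps                # exploration rate

    def route(self, x: np.ndarray) -> str:
        """Pick an LLM for a (tag-enhanced) query embedding x."""
        if random.random() < self.eps:
            return random.choice(self.llms)  # explore: supports continual learning
        # exploit: highest predicted quality minus weighted predicted cost
        return max(self.llms,
                   key=lambda m: self.quality[m].predict(x)
                                 - self.lam * self.cost[m].predict(x))

    def observe(self, llm: str, x: np.ndarray,
                quality: float, cost: float) -> None:
        # feed observed outcomes back into the per-LLM predictors
        self.quality[llm].update(x, quality)
        self.cost[llm].update(x, cost)
```

A latency constraint could be added by filtering `self.llms` to models whose predicted latency meets the deadline before taking the argmax.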