Carlos Gemmell
2023
ToolWriter: Question Specific Tool Synthesis for Tabular Data
Carlos Gemmell | Jeff Dalton
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Tabular question answering (TQA) presents a challenging setting for neural systems by requiring joint reasoning over natural language and large amounts of semi-structured data. Unlike humans, who use programmatic tools like filters to transform data before processing, language models in TQA process tables directly, resulting in information loss as table size increases. In this paper we propose ToolWriter to generate query-specific programs and detect when to apply them to transform tables and align them with the TQA model's capabilities. Focusing ToolWriter on generating row-filtering tools improves the state of the art for WikiTableQuestions and WikiSQL, with the most performance gained on long tables. By investigating headroom, our work highlights the broader potential of programmatic tools combined with neural components for manipulating large amounts of structured data.
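To make the row-filtering idea in this abstract concrete, the sketch below shows one way a question-specific filter could be synthesised and applied before a TQA model ever sees the table. The function names (`generate_tool`, `tqa_model`) and the predicate format are assumptions chosen for illustration, not ToolWriter's actual interface.

```python
# Minimal sketch of question-specific row filtering, assuming a table is a
# list of dicts and the synthesised "tool" is a Python predicate over a row.
# All names here are illustrative, not ToolWriter's actual implementation.

def filter_rows(table, predicate):
    """Keep only the rows that satisfy the synthesised predicate."""
    return [row for row in table if predicate(row)]

def answer_with_row_filtering(question, table, generate_tool, tqa_model):
    """Synthesise a question-specific filter, shrink the table, then answer."""
    predicate = generate_tool(question, list(table[0].keys()))
    reduced = filter_rows(table, predicate)
    # Fall back to the full table if the tool filtered out every row.
    return tqa_model(question, reduced if reduced else table)

# Toy usage: a hand-written predicate stands in for a generated tool.
table = [
    {"Country": "France", "Gold": 10},
    {"Country": "Italy", "Gold": 8},
    {"Country": "Spain", "Gold": 3},
]
toy_tool = lambda question, headers: (lambda row: row["Gold"] > 5)
toy_tqa = lambda question, rows: [row["Country"] for row in rows]
print(answer_with_row_filtering("Which countries won more than 5 golds?",
                                table, toy_tool, toy_tqa))
# ['France', 'Italy']
```

The key point the sketch captures is that the model only reasons over the reduced table, which is what limits information loss on long tables.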
2022
GRILLBot: A multi-modal conversational agent for complex real-world tasks
Carlos Gemmell | Federico Rossetto | Iain Mackie | Paul Owoicho | Sophie Fischer | Jeff Dalton
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
We present GRILLBot, an open-source multi-modal task-oriented voice assistant that helps users perform complex tasks, focusing on the domains of cooking and home improvement. GRILLBot curates and leverages web information extraction to build coverage over a broad range of tasks for which a user can receive guidance. To represent each task, we propose TaskGraphs, a dynamic graph unifying steps, requirements, and curated domain knowledge, enabling contextual question answering and detailed explanations. Multi-modal elements play a key role in GRILLBot, both helping the user navigate through the task and enriching the experience with helpful videos and images that are automatically linked throughout the task. We leverage a contextual neural semantic parser to enable flexible navigation when interacting with the system by jointly encoding stateful information with the conversation history. GRILLBot enables dynamic and adaptable task planning and assistance for complex tasks by combining task representations that incorporate text and structure with neural models for search, question answering, and dialogue state management. GRILLBot competed in the Alexa Prize TaskBot Challenge as one of the finalists.
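As an illustration of the TaskGraph representation described above, here is a minimal sketch of how such a graph could be held in memory, with step, requirement, and knowledge nodes carrying optional multi-modal attachments. The class and field names are assumptions for illustration and do not reflect GRILLBot's actual schema.

```python
# Illustrative sketch of a task graph linking steps, requirements, and media.
# Names and structure are hypothetical, not GRILLBot's actual TaskGraph schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TaskNode:
    node_id: str
    kind: str                     # e.g. "step", "requirement", "knowledge"
    text: str
    media: List[str] = field(default_factory=list)   # linked image/video URLs

@dataclass
class TaskGraph:
    nodes: Dict[str, TaskNode] = field(default_factory=dict)
    edges: Dict[str, List[str]] = field(default_factory=dict)  # id -> successors

    def add_node(self, node: TaskNode):
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, [])

    def connect(self, src: str, dst: str):
        self.edges[src].append(dst)

    def next_steps(self, node_id: str) -> List[TaskNode]:
        """Return step nodes reachable in one hop, for flexible navigation."""
        return [self.nodes[n] for n in self.edges[node_id]
                if self.nodes[n].kind == "step"]

# Toy usage:
g = TaskGraph()
g.add_node(TaskNode("s1", "step", "Preheat the oven to 180C"))
g.add_node(TaskNode("s2", "step", "Mix flour and sugar",
                    media=["https://example.com/mixing.mp4"]))
g.add_node(TaskNode("r1", "requirement", "Oven"))
g.connect("s1", "s2")
g.connect("s1", "r1")
print([n.text for n in g.next_steps("s1")])  # ['Mix flour and sugar']
```

Keeping steps, requirements, and knowledge as typed nodes in one graph is what lets a dialogue policy answer contextual questions and re-plan navigation without a fixed linear script.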
Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models
Mirelle Candida Bueno | Carlos Gemmell | Jeff Dalton | Roberto Lotufo | Rodrigo Nogueira
Proceedings of the 1st Workshop on Mathematical Natural Language Processing (MathNLP)
The ability to extrapolate, i.e., to make predictions on sequences longer than those presented as training examples, is a challenging problem for current deep learning models. Recent work shows that this limitation persists in state-of-the-art Transformer-based models. Most solutions to this problem use specific architectures or training methods that do not generalize to other tasks. We demonstrate that large language models can succeed in extrapolation without modifying their architecture or training procedure. Our experimental results show that generating step-by-step rationales and introducing marker tokens are both required for effective extrapolation. First, we induce a language model to produce step-by-step rationales before outputting the answer to effectively communicate the task to the model. However, as sequences become longer, we find that current models struggle to keep track of token positions. To address this issue, we interleave output tokens with markup tokens that act as explicit positional and counting symbols. Our findings show how these two complementary approaches enable remarkable sequence extrapolation and highlight a limitation of current architectures: they fail to generalize effectively without explicit surface-form guidance. Code available at https://anonymous.4open.science/r/induced-rationales-markup-tokens-0650/README.md
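The interleaved-markup idea can be illustrated with a small helper that weaves explicit position markers into an output sequence. The `<i>` marker format is an assumption chosen for illustration; the paper's exact surface form may differ.

```python
# Illustrative sketch of interleaving positional markup tokens with output
# symbols. The "<i>" marker format is an assumption, not the paper's exact one.

def interleave_markup(tokens):
    """Prefix each output token with an explicit position marker."""
    return " ".join(f"<{i}> {tok}" for i, tok in enumerate(tokens))

def strip_markup(marked):
    """Recover the plain output sequence from a marked-up generation."""
    parts = marked.split()
    return [tok for i, tok in enumerate(parts) if i % 2 == 1]

# A model asked to emit a digit sequence would be trained to generate
# "<0> 7 <1> 3 <2> 9" instead of "7 3 9", making positions explicit
# counting symbols it can condition on as sequences grow longer.
print(interleave_markup(["7", "3", "9"]))   # <0> 7 <1> 3 <2> 9
print(strip_markup("<0> 7 <1> 3 <2> 9"))    # ['7', '3', '9']
```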