Simon Keizer
2026
Context-Aware Language Understanding in Human-Robot Dialogue with LLMs
Svetlana Stoyanchev | Youmna Farag | Simon Keizer | Mohan Li | Rama Sanand Doddipatla
Proceedings of the 16th International Workshop on Spoken Dialogue System Technology
In this work, we explore the use of large language models (LLMs) as interpreters of user utterances within a human-robot language interface. A user interacting with a robot that operates in a physical environment should be able to issue commands that interrupt the robot's actions, for example, corrections or refinements of the task. This study addresses the context-aware interpretation of user utterances, including those issued while the robot is actively engaged in task execution, exploring whether LLMs, without fine-tuning, can translate user commands into corresponding sequences of robot actions. Using an interactive multimodal interface combining text and video for a virtual robot operating in simulated home environments, we collect a dataset of user utterances that guide the robot through various household tasks, simultaneously capturing manual interpretations when the automatic one fails. Driven by practical considerations, the collected dataset is used to compare the interpretive performance of GPT models with that of smaller publicly available alternatives. Our findings reveal that action-interrupting utterances pose challenges for all models. While GPT consistently outperforms the smaller models, interpretation accuracy improves across the board when relevant, dynamically selected in-context learning examples are included in the prompt.
2025
Conditional Multi-Stage Failure Recovery for Embodied Agents
Youmna Farag | Svetlana Stoyanchev | Mohan Li | Simon Keizer | Rama Doddipatla
Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025)
Embodied agents performing complex tasks are susceptible to execution failures, motivating the need for effective failure recovery mechanisms. In this work, we introduce a conditional multi-stage failure recovery framework that employs zero-shot chain prompting. The framework is structured into four error-handling stages, with three operating during task execution and one functioning as a post-execution reflection phase. Our approach utilises the reasoning capabilities of LLMs to analyse execution challenges within their environmental context and devise strategic solutions. We evaluate our method on the TfD benchmark of the TEACh dataset and achieve state-of-the-art performance, outperforming a baseline without error recovery by 11.5% and surpassing the strongest existing model by 19%.
2023
Evaluating Large Language Models for Document-grounded Response Generation in Information-Seeking Dialogues
Norbert Braunschweiler | Rama Doddipatla | Simon Keizer | Svetlana Stoyanchev
Proceedings of the 1st Workshop on Taming Large Language Models: Controllability in the era of Interactive Assistants!
In this paper, we investigate the use of large language models (LLMs) like ChatGPT for document-grounded response generation in the context of information-seeking dialogues. For evaluation, we use the MultiDoc2Dial corpus of task-oriented dialogues in four social service domains, previously used in the DialDoc 2022 Shared Task. Information-seeking dialogue turns are grounded in multiple documents providing relevant information. We generate dialogue completion responses by prompting a ChatGPT model, using two methods: ChatCompletion and LlamaIndex. ChatCompletion uses knowledge from ChatGPT model pre-training, while LlamaIndex also extracts relevant information from the documents. Observing that document-grounded response generation via LLMs cannot be adequately assessed by automatic evaluation metrics, as the generated responses are significantly more verbose, we perform a human evaluation in which annotators rate the output of the shared task winning system, the outputs of the two ChatGPT variants, and human responses. While both ChatGPT variants are more likely to include information not present in the relevant segments, possibly including hallucinations, they are rated higher than both the shared task winning system and the human responses.
2022
Combining Structured and Unstructured Knowledge in an Interactive Search Dialogue System
Svetlana Stoyanchev | Suraj Pandey | Simon Keizer | Norbert Braunschweiler | Rama Sanand Doddipatla
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Users of interactive search dialogue systems specify their preferences with natural language utterances. However, a schema-driven system is limited to handling preferences that correspond to the predefined database content. In this work, we present a methodology for extending a schema-driven interactive search dialogue system with the ability to handle unconstrained user preferences. Using unsupervised semantic similarity metrics and the text snippets associated with the search items, the system identifies suitable items for the user's unconstrained natural language query. In a crowd-sourced evaluation, users chat with our extended restaurant search system. Based on objective metrics and subjective user ratings, we demonstrate the feasibility of using an unsupervised, low-latency approach to extend a schema-driven search dialogue system to handle unconstrained user preferences.
2020
The ISO Standard for Dialogue Act Annotation, Second Edition
Harry Bunt | Volha Petukhova | Emer Gilmartin | Catherine Pelachaud | Alex Fang | Simon Keizer | Laurent Prévot
Proceedings of the Twelfth Language Resources and Evaluation Conference
ISO standard 24617-2 for dialogue act annotation, established in 2012, has in the past few years been used both in corpus annotation and in the design of components for spoken and multimodal dialogue systems. This has brought some inaccuracies and undesirable limitations of the standard to light, which are addressed in a proposed second edition. The second edition allows a more accurate annotation of dependence relations and rhetorical relations in dialogue. Following the ISO 24617-4 principles of semantic annotation, and borrowing ideas from EmotionML, a triple-layered plug-in mechanism is introduced that allows dialogue act descriptions to be enriched with information about their semantic content, accompanying emotions, and other aspects, and allows the annotation scheme to be customised by adding application-specific dialogue act types.
2007
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue
Harry Bunt | Simon Keizer | Tim Paek
Co-authors
- Harry Bunt 6
- Blaise Thomson 6
- Steve Young 6
- Kai Yu 6
- Milica Gasic 5
- François Mairesse 5
- Filip Jurcicek 4
- Oliver Lemon 4
- Svetlana Stoyanchev 4
- Norbert Braunschweiler 2
- Rama Sanand Doddipatla 2
- Rama Doddipatla 2
- Youmna Farag 2
- Emer Gilmartin 2
- Mohan Li 2
- Xingkun Liu 2
- Roser Morante 2
- Catherine Pelachaud 2
- Volha Petukhova 2
- Laurent Prévot 2
- Verena Rieser 2
- Alan W. Black 1
- Susanne Burger 1
- Alistair Conkie 1
- Heriberto Cuayáhuitl 1
- Mihai Dobre 1
- Ondřej Dušek 1
- Ioannis Efstathiou 1
- Klaus-Peter Engelbrecht 1
- Maxine Eskenazi 1
- Alex Fang 1
- Mary Ellen Foster 1
- Andre Gaschler 1
- Manuel Giuliani 1
- Markus Guhe 1
- Helen Hastie 1
- Alex Lascarides 1
- Fabrice Lefèvre 1
- Nicolas Merigaud 1
- Anton Nijholt 1
- Tim Paek 1
- Suraj Pandey 1
- Gabriel Parent 1
- Jost Schatzmann 1
- Gabriel Schubiner 1
- Mariët Theune 1
- Jason D. Williams 1
- Rieks op den Akker 1