Antonio Roque


2021

How Should Agents Ask Questions For Situated Learning? An Annotated Dialogue Corpus
Felix Gervits | Antonio Roque | Gordon Briggs | Matthias Scheutz | Matthew Marge
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue

Intelligent agents that are confronted with novel concepts in situated environments will need to ask their human teammates questions to learn about the physical world. To better understand this problem, we need data about asking questions in situated task-based interactions. To this end, we present the Human-Robot Dialogue Learning (HuRDL) Corpus - a novel dialogue corpus collected in an online interactive virtual environment in which human participants play the role of a robot performing a collaborative tool-organization task. We describe the corpus data and a corresponding annotation scheme to offer insight into the form and content of questions that humans ask to facilitate learning in a situated environment. We provide the corpus as an empirically-grounded resource for improving question generation in situated intelligent agents.
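
The annotation scheme summarized above can be pictured as structured records over each participant question. A minimal sketch in Python, assuming hypothetical form/content label sets (the actual HuRDL categories are defined in the paper):

```python
from dataclasses import dataclass

# Hypothetical label sets; the real HuRDL annotation categories are
# specified in the paper, not reproduced here.
QUESTION_FORMS = {"wh", "yes-no", "alternative", "declarative"}
QUESTION_CONTENTS = {"object-identity", "object-location", "task-goal", "action-clarification"}

@dataclass
class AnnotatedQuestion:
    """One participant question turn with form and content labels."""
    turn_id: int
    text: str
    form: str      # syntactic form of the question
    content: str   # what the question is trying to learn about the situation

    def __post_init__(self):
        assert self.form in QUESTION_FORMS
        assert self.content in QUESTION_CONTENTS

# Invented example record from a tool-organization dialogue:
q = AnnotatedQuestion(
    turn_id=12,
    text="Which shelf does the torque wrench go on?",
    form="wh",
    content="object-location",
)
```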

2020

It’s About Time: Turn-Entry Timing For Situated Human-Robot Dialogue
Felix Gervits | Ravenna Thielstrom | Antonio Roque | Matthias Scheutz
Proceedings of the 21st Annual Meeting of the Special Interest Group on Discourse and Dialogue

Turn-entry timing is an important requirement for conversation, and one that spoken dialogue systems largely fail at. In this paper, we introduce a computational framework based on work from Psycholinguistics, which is aimed at achieving proper turn-taking timing for situated agents. The approach involves incremental processing and lexical prediction of the turn in progress, which allows a situated dialogue system to start its turn and initiate actions earlier than would otherwise be possible. We evaluate the framework by integrating it within a cognitive robotic architecture and testing performance on a corpus of task-oriented human-robot directives. We demonstrate that: 1) the system is superior to a non-incremental system in terms of faster responses, reduced gap between turns, and the ability to perform actions early, 2) the system can time its turn to come in immediately at a transition point or earlier to produce several types of overlap, and 3) the system is robust to various forms of disfluency in the input. Overall, this domain-independent framework can be integrated into various dialogue systems to improve responsiveness, and is a step toward more natural, human-like turn-taking behavior.
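
To make the incremental idea concrete, here is a minimal sketch of early turn commitment driven by lexical prediction, assuming a hypothetical predict_completion() interface; the paper's framework is integrated into a cognitive robotic architecture rather than implemented this way:

```python
from typing import Callable, List, Tuple

class IncrementalTurnTaker:
    """Sketch: commit to a response early once the predicted completion
    of the user's in-progress turn is confident enough."""

    def __init__(self,
                 predict_completion: Callable[[List[str]], Tuple[str, float]],
                 act: Callable[[str], None],
                 threshold: float = 0.8):
        self.predict_completion = predict_completion  # hypothetical lexical predictor
        self.act = act                                # starts the system's response/action
        self.threshold = threshold
        self.prefix: List[str] = []
        self.committed = False

    def on_word(self, word: str) -> None:
        """Called once per incrementally recognized word of the user's turn."""
        self.prefix.append(word)
        if self.committed:
            return
        completion, confidence = self.predict_completion(self.prefix)
        if confidence >= self.threshold:
            # Begin responding before the user's turn ends, aiming to land
            # at (or just before) the projected transition point.
            self.committed = True
            self.act(" ".join(self.prefix + [completion]))
```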

Reasoning Requirements for Indirect Speech Act Interpretation
Vasanth Sarathy | Alexander Tsuetaki | Antonio Roque | Matthias Scheutz
Proceedings of the 28th International Conference on Computational Linguistics

We perform a corpus analysis to develop a representation of the knowledge and reasoning used to interpret indirect speech acts. An indirect speech act (ISA) is an utterance whose intended meaning is different from its literal meaning. We focus on those speech acts in which slight changes in situational or contextual information can switch the dominant intended meaning of an utterance from direct to indirect or vice-versa. We computationalize how various contextual features can influence a speaker’s beliefs, and how these beliefs can influence the intended meaning and choice of the surface form of an utterance. We axiomatize the domain-general patterns of reasoning involved, and implement a proof-of-concept architecture using Answer Set Programming. Our model is presented as a contribution to cognitive science and psycholinguistics, so representational decisions are justified by existing theoretical work.
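
As an illustration of the kind of context-sensitive inference involved (the paper's actual model is axiomatized in Answer Set Programming, not Python), a toy rule over hypothetical context features might look like:

```python
# Toy illustration of context-dependent indirect speech act (ISA)
# interpretation; feature names are invented for the example.

def interpret(utterance: str, context: dict) -> str:
    """Return 'direct' or 'indirect' for the classic ability-question ISA."""
    if utterance == "Can you open the window?":
        # If the hearer is plausibly unable to act, the speaker is more
        # plausibly querying ability: the direct (literal) reading wins.
        if context.get("hearer_hands_full") or context.get("window_painted_shut"):
            return "direct"      # literal question about ability
        # Otherwise, in a cooperative setting the dominant reading is a request.
        return "indirect"        # request to open the window
    return "direct"

print(interpret("Can you open the window?", {}))                          # indirect
print(interpret("Can you open the window?", {"window_painted_shut": True}))  # direct
```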

Developing a Corpus of Indirect Speech Act Schemas
Antonio Roque | Alexander Tsuetaki | Vasanth Sarathy | Matthias Scheutz
Proceedings of the Twelfth Language Resources and Evaluation Conference

Resolving Indirect Speech Acts (ISAs), in which the intended meaning of an utterance is not identical to its literal meaning, is essential to enabling the participation of intelligent systems in people’s everyday lives. Especially challenging are those cases in which the interpretation of such ISAs depends on context. To test a system’s ability to perform ISA resolution we need a corpus, but developing such a corpus is difficult, especially given the context-dependent requirement. This paper addresses the problems of constructing a corpus of ISAs, taking inspiration from relevant work in using corpora for reasoning tasks. We present a formal representation of ISA Schemas required for such testing, including a measure of the difficulty of a particular schema. We develop an approach to authoring these schemas using corpus analysis and crowdsourcing, to maximize realism and minimize the amount of expert authoring needed. Finally, we describe several characteristics of the collected data, and potential future work.
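
One way to picture a schema record is as an utterance paired with context variants that flip its dominant reading, plus a difficulty score. A sketch under that assumption, with illustrative field names and an illustrative difficulty measure (not the paper's formal definition):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ISASchema:
    utterance: str
    literal_meaning: str
    # Maps a short context description to the intended meaning in that context.
    contexts: Dict[str, str] = field(default_factory=dict)
    difficulty: float = 0.0

    def compute_difficulty(self, annotator_agreement: List[float]) -> float:
        """Illustrative measure: lower human agreement implies a harder schema."""
        self.difficulty = 1.0 - sum(annotator_agreement) / len(annotator_agreement)
        return self.difficulty

schema = ISASchema(
    utterance="It's cold in here.",
    literal_meaning="assertion about the room temperature",
    contexts={
        "hearer is standing next to an open window": "request to close the window",
        "speaker is reading a thermostat aloud": "literal assertion",
    },
)
print(schema.compute_difficulty([0.6, 0.7, 0.8]))
```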

Generating Explanations of Action Failures in a Cognitive Robotic Architecture
Ravenna Thielstrom | Antonio Roque | Meia Chita-Tegmark | Matthias Scheutz
2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence

We describe an approach to generating explanations about why robot actions fail, focusing on the considerations of robots that are run by cognitive robotic architectures. We define a set of Failure Types and Explanation Templates, motivating them by the needs and constraints of cognitive architectures that use action scripts and interpretable belief states, and describe content realization and surface realization in this context. We then describe an evaluation that can be extended to further study the effects of varying the explanation templates.
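
A small sketch of the template-filling idea, assuming hypothetical failure categories and template strings (the paper defines its own Failure Types and Explanation Templates):

```python
from enum import Enum, auto

class FailureType(Enum):
    UNKNOWN_REFERENT = auto()    # the robot cannot resolve an object reference
    PRECONDITION_UNMET = auto()  # an action script's precondition is false
    EXECUTION_ERROR = auto()     # the action was attempted but failed

TEMPLATES = {
    FailureType.UNKNOWN_REFERENT: "I could not {action} because I do not know what '{referent}' is.",
    FailureType.PRECONDITION_UNMET: "I could not {action} because the condition '{precondition}' does not hold.",
    FailureType.EXECUTION_ERROR: "I tried to {action}, but {error}.",
}

def explain(failure: FailureType, **slots: str) -> str:
    """Content realization: choose a template; surface realization: fill its slots."""
    return TEMPLATES[failure].format(**slots)

print(explain(FailureType.EXECUTION_ERROR,
              action="open the door", error="the handle did not turn"))
```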

2019

Engaging in Dialogue about an Agent’s Norms and Behaviors
Daniel Kasenberg | Antonio Roque | Ravenna Thielstrom | Matthias Scheutz
Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019)

Generating justifications for norm-related agent decisions
Daniel Kasenberg | Antonio Roque | Ravenna Thielstrom | Meia Chita-Tegmark | Matthias Scheutz
Proceedings of the 12th International Conference on Natural Language Generation

We present an approach to generating natural language justifications of decisions derived from norm-based reasoning. Assuming an agent which maximally satisfies a set of rules specified in an object-oriented temporal logic, the user can ask factual questions (about the agent’s rules, actions, and the extent to which the agent violated the rules) as well as “why” questions that require the agent to compare actual behavior to counterfactual trajectories with respect to these rules. To produce natural-sounding explanations, we focus on the subproblem of producing natural language clauses from statements in a fragment of temporal logic, and then describe how to embed these clauses into explanatory sentences. We use a human judgment evaluation on a testbed task to compare our approach to variants in terms of intelligibility, mental model and perceived trust.
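
In the spirit of that subproblem, here is a toy translator from a small temporal-logic fragment to English clauses; the operators and phrasings are illustrative, not the paper's object-oriented temporal logic or its realization rules:

```python
# Formulas are nested tuples, e.g.
# ("always", ("not", ("atom", "the agent crosses the red line")))
def clause(formula) -> str:
    op, *args = formula
    if op == "atom":
        return args[0]                                      # e.g. "the agent enters the room"
    if op == "not":
        return "it is not the case that " + clause(args[0])
    if op == "always":
        return "always, " + clause(args[0])
    if op == "eventually":
        return "eventually, " + clause(args[0])
    if op == "until":
        return clause(args[0]) + " until " + clause(args[1])
    raise ValueError(f"unknown operator: {op}")

rule = ("always", ("not", ("atom", "the agent crosses the red line")))
print("The rule says that " + clause(rule) + ".")
```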

2012

Towards a computational approach to literary text analysis
Antonio Roque
Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature

2010

Dialogues in Context: An Objective User-Oriented Evaluation Approach for Virtual Human Dialogue
Susan Robinson | Antonio Roque | David Traum
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

As conversational agents are now being developed to encounter more complex dialogue situations it is increasingly difficult to find satisfactory methods for evaluating these agents. Task-based measures are insufficient where there is no clearly defined task. While user-based evaluation methods may give a general sense of the quality of an agent's performance, they shed little light on the relative quality or success of specific features of dialogue that are necessary for system improvement. This paper examines current dialogue agent evaluation practices and motivates the need for a more detailed approach for defining and measuring the quality of dialogues between agent and user. We present a framework for evaluating the dialogue competence of artificial agents involved in complex and underspecified tasks when conversing with people. A multi-part coding scheme is proposed that provides a qualitative analysis of human utterances, and rates the appropriateness of the agent's responses to these utterances. The scheme is outlined, and then used to evaluate Staff Duty Officer Moleno, a virtual guide in Second Life.
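
One way to picture the proposed coding is as a per-exchange record pairing a qualitative category for the human utterance with an appropriateness rating for the agent's response. A minimal sketch, assuming hypothetical utterance categories and a 1-5 scale (the actual multi-part scheme is specified in the paper):

```python
from dataclasses import dataclass

# Hypothetical categories and scale for illustration only.
UTTERANCE_CATEGORIES = {"question", "statement", "greeting", "off-topic"}

@dataclass
class ExchangeCode:
    user_utterance: str
    user_category: str      # qualitative analysis of the human utterance
    agent_response: str
    appropriateness: int    # rated appropriateness of the agent's response (1-5)

    def __post_init__(self):
        assert self.user_category in UTTERANCE_CATEGORIES
        assert 1 <= self.appropriateness <= 5

code = ExchangeCode(
    user_utterance="Where can I file a leave request?",
    user_category="question",
    agent_response="You can file it at the admin office on the second floor.",
    appropriateness=5,
)
```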

2008

Degrees of Grounding Based on Evidence of Understanding
Antonio Roque | David Traum
Proceedings of the 9th SIGdial Workshop on Discourse and Dialogue

2007

A Model of Compliance and Emotion for Potentially Adversarial Dialogue Agents
Antonio Roque | David Traum
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue

Hassan: A Virtual Human for Tactical Questioning
David Traum | Antonio Roque | Anton Leuski | Panayiotis Georgiou | Jillian Gerten | Bilyana Martinovski | Shrikanth Narayanan | Susan Robinson | Ashish Vaswani
Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue

2006

An Information State-Based Dialogue Manager for Call for Fire Dialogues
Antonio Roque | David Traum
Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue

2003

A Hybrid Text Classification Approach for Analysis of Student Essays
Carolyn P. Rosé | Antonio Roque | Dumisizwe Bhembe | Kurt VanLehn
Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natural Language Processing

A Hybrid Approach to Content Analysis for Automatic Essay Grading
Carolyn P. Rosé | Antonio Roque | Dumisizwe Bhembe | Kurt VanLehn
Companion Volume of the Proceedings of HLT-NAACL 2003 - Short Papers