Dialogue & Discourse (2012)
Dialogue & Discourse, Volume 3 (9 papers)
A Detailed Account of The First Question Generation Shared Task Evaluation Challenge
Vasile Rus | Brendan Wyse | Paul Piwek | Mihai Lintean | Svetlana Stoyanchev | Cristian Moldovan
The paper provides a detailed account of the First Shared Task Evaluation Challenge on Question Generation that took place in 2010. The campaign included two tasks that take text as input and produce text, i.e., questions, as output: Task A, "Question Generation from Paragraphs", and Task B, "Question Generation from Sentences". Motivation, data sets, evaluation criteria, guidelines for judges, and results are presented for the two tasks. Lessons learned and advice for future Question Generation Shared Task Evaluation Challenges (QG-STEC) are also offered.
Question Generation based on Lexico-Syntactic Patterns Learned from the Web
Sergio Curto | Ana Cristina Mendes | Luisa Coheur
THE-MENTOR automatically generates multiple-choice tests from a given text. This tool aims at supporting the dialogue system of the FalaComigo project, as one of FalaComigo's goals is the interaction with tourists through questions/answers and quizzes about their visit. In a minimally supervised learning process, and by leveraging the redundancy and linguistic variability of the Web, THE-MENTOR learns lexico-syntactic patterns using a set of question/answer seeds. Afterward, these patterns are used to match the sentences from which new questions (and answers) can be generated. Finally, several filters are applied in order to discard low-quality items. In this paper we detail the question generation task as performed by THE-MENTOR and evaluate its performance.
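As an illustration of the bootstrapping idea, the following Python sketch learns slot-filling patterns from question/answer seeds and applies them to unseen sentences. It is a toy under strong assumptions (regex patterns, invented seeds, a `topic` field added so templates can be rebuilt), not THE-MENTOR's implementation, which mines Web snippets and uses richer lexico-syntactic patterns:

```python
import re

# Seed question/answer/topic triples; THE-MENTOR bootstraps from
# question/answer seeds, and the topic field here is an illustrative
# simplification so question templates can be reconstructed.
seeds = [
    ("Who painted Guernica?", "Picasso", "Guernica"),
    ("Who wrote Hamlet?", "Shakespeare", "Hamlet"),
]

def learn_patterns(sentences, seeds):
    """Turn each sentence mentioning a seed's answer and topic into a
    regex with answer/topic slots, paired with a question template."""
    patterns = []
    for question, answer, topic in seeds:
        for s in sentences:
            if answer in s and topic in s:
                pat = re.escape(s)
                pat = pat.replace(re.escape(answer), r"(?P<answer>\w+)")
                pat = pat.replace(re.escape(topic), r"(?P<topic>\w+)")
                template = question.replace(topic, "{topic}")
                patterns.append((re.compile(pat), template))
    return patterns

def generate(sentences, patterns):
    """Match learned patterns against unseen sentences to yield new
    question/answer pairs."""
    out = []
    for regex, template in patterns:
        for s in sentences:
            m = regex.match(s)
            if m:
                out.append((template.format(topic=m.group("topic")),
                            m.group("answer")))
    return out

train = ["Picasso painted Guernica.", "Shakespeare wrote Hamlet."]
unseen = ["Monet painted Waterlilies."]
print(generate(unseen, learn_patterns(train, seeds)))
# -> [('Who painted Waterlilies?', 'Monet')]
```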
Creating Conversational Characters Using Question Generation Tools
Xuchen Yao | Emma Tosch | Grace Chen | Elnaz Nouri | Ron Artstein | Anton Leuski | Kenji Sagae | David Traum
This article describes a new tool for extracting question-answer pairs from text articles, and reports three experiments which investigate how suitable this technique is for supplying knowledge to conversational characters. Experiment 1 demonstrates the feasibility of our method by creating characters for 14 distinct topics and evaluating them using hand-authored questions. Experiment 2 evaluates three of these characters using questions collected from naive participants, showing that the generated characters provide full or partial answers to about half of the questions asked. Experiment 3 adds automatically extracted knowledge to an existing, hand-authored character, demonstrating that augmented characters can answer questions about new topics but with some degradation of the ability to answer questions about topics that the original character was trained to answer. Overall, the results show that question generation is a promising method for creating or augmenting a question answering conversational character using an existing text.
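A rough sketch of the retrieval step such a character performs is given below; the question-answer pairs are invented, and a simple Jaccard overlap stands in for the trained statistical retrieval model behind the actual characters:

```python
# Extracted QA pairs serve as the character's knowledge base.
qa_pairs = [
    ("when was the roman empire founded",
     "The Roman Empire was founded in 27 BC."),
    ("who was the first emperor",
     "Augustus was the first Roman emperor."),
]

def tokens(text):
    # Lowercase and strip trailing punctuation from each token.
    return {w.strip("?.,!") for w in text.lower().split()}

def answer(user_question, pairs, threshold=0.3):
    """Return the stored answer whose question best overlaps the user's
    question; fall back to an off-topic reply below the threshold."""
    def score(stored_q):
        q, u = tokens(stored_q), tokens(user_question)
        return len(q & u) / len(q | u)  # Jaccard similarity
    best_q, best_a = max(pairs, key=lambda p: score(p[0]))
    return best_a if score(best_q) >= threshold else "I don't know about that."

print(answer("Who was the first emperor of Rome?", qa_pairs))
# -> Augustus was the first Roman emperor.
```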
G-Asks: An Intelligent Automatic Question Generation System for Academic Writing Support
Ming Liu | Rafael A. Calvo | Vasile Rus
Many electronic feedback systems have been proposed for writing support. However, most of these systems only aim at supporting writing to communicate rather than writing to learn, as in the case of literature review writing. Trigger questions are a potential form of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, including 24 human supervisors and 33 research students, in an Engineering Research Method course at The University of Sydney, and compared questions generated by G-Asks with human-generated questions. The results indicate that G-Asks can generate questions as useful as human supervisors ('useful' is one of five question quality measures) while significantly outperforming Human Peer and Generic Questions in most quality measures after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types derived from the human supervisors' questions, and discussed how the human supervisors generate such questions from the source text.
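The template-filling step behind trigger questions can be sketched as follows; the templates and the extracted (citee, verb, object) triple are invented for illustration, whereas G-Asks derives its questions from parsed, semantic-role-labelled citation sentences:

```python
# Illustrative trigger-question templates keyed by pedagogical intent;
# these are not the system's actual template set.
templates = {
    "why":      "Why do you think {citee} {verb} {obj}?",
    "relate":   "How does {citee}'s work relate to other studies that {verb} {obj}?",
    "evidence": "What evidence shows that {citee} successfully {verb} {obj}?",
}

def trigger_questions(citee, verb, obj):
    """Instantiate every template for one extracted citation triple."""
    return [t.format(citee=citee, verb=verb, obj=obj)
            for t in templates.values()]

# e.g. a triple extracted from "Smith (2009) evaluated the new algorithm."
for q in trigger_questions("Smith", "evaluated", "the new algorithm"):
    print(q)
```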
Question Generation from Concept Maps
Andrew M. Olney | Arthur C. Graesser | Natalie K. Person
In this paper we present a question generation approach suitable for tutorial dialogues. The approach is based on previous psychological theories that hypothesize questions are generated from a knowledge representation modeled as a concept map. Our model automatically extracts concept maps from a textbook and uses them to generate questions. The purpose of the study is to generate and evaluate pedagogically appropriate questions at varying levels of specificity across one or more sentences. The evaluation metrics include scales from the Question Generation Shared Task and Evaluation Challenge and a new scale specific to the pedagogical nature of questions in tutoring.
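A minimal sketch of the triple-to-question step follows, assuming hand-written triples and templates; the paper's model extracts the concept map automatically from a textbook and targets pedagogically appropriate question types:

```python
# Invented concept-map triples of the kind an extractor might yield.
triples = [
    ("the mitochondrion", "produces", "ATP"),
    ("the mitochondrion", "is-part-of", "the cell"),
]

# Relation-keyed templates yield specific questions about single edges.
edge_templates = {
    "produces":   "What does {subject} produce?",
    "is-part-of": "What is {subject} part of?",
}

def specific_questions(triples):
    return [edge_templates[rel].format(subject=subj)
            for subj, rel, obj in triples]

def general_question(triples):
    """A vaguer question spanning all edges anchored at one concept."""
    return f"Can you tell me about {triples[0][0]}?"

print(general_question(triples))        # broad, multi-edge scope
for q in specific_questions(triples):   # narrow, single-edge scope
    print(q)
```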
Question Generation for French: Collating Parsers and Paraphrasing Questions
Delphine Bernhard | Louis de Viron | Véronique Moriceau | Xavier Tannier
This article describes a question generation system for French. The transformation of declarative sentences into questions relies on two different syntactic parsers and named entity recognition tools. This makes it possible to further diversify the questions generated and to possibly alleviate the problems inherent to the analysis tools. The system also generates reformulations for the questions based on variations in the question words, inducing answers with different granularities, and nominalisations of action verbs. We evaluate the questions generated for sentences extracted from two different corpora: a corpus of newspaper articles used for the CLEF Question Answering evaluation campaign and a corpus of simplified online encyclopedia articles. The evaluation shows that the system is able to generate a majority of good and medium quality questions. We also present an original evaluation of the question generation system using the question analysis module of a question answering system.
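The core transformation can be sketched as follows, with invented entity types and question-word lists; the granularity variants mirror the reformulations described above, while the real system uses two French parsers and named entity recognition rather than the single hard-coded auxiliary inversion here:

```python
# Question-word variants per entity type; the coarse/fine split
# ("When" vs. "In which year") mirrors the system's reformulations
# with answers of different granularities. Types and lists are
# illustrative.
question_words = {
    "PERSON":   ["Who"],
    "DATE":     ["When", "In which year"],
    "LOCATION": ["Where", "In which city"],
}

def questions_from(sentence, entity, etype):
    """Drop the entity (the answer phrase), invert around the 'was'
    auxiliary, and prepend each question-word variant. Only 'was' is
    handled here; the real system relies on full syntactic parses."""
    clause = sentence.replace(entity, "").rstrip(" .").strip()
    subject, _, predicate = clause.partition(" was ")
    return [f"{qw} was {subject} {predicate}?"
            for qw in question_words[etype]]

for q in questions_from("Jean Nouvel was born in 1945.", "in 1945", "DATE"):
    print(q)
# -> When was Jean Nouvel born?
# -> In which year was Jean Nouvel born?
```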
Semantics-based Question Generation and Implementation
Xuchen Yao | Gosse Bouma | Yi Zhang
This paper presents a question generation system based on the approach of semantic rewriting. State-of-the-art deep linguistic parsing and generation tools are employed to convert (back and forth) between natural language sentences and their meaning representations in the form of Minimal Recursion Semantics (MRS). By carefully operating on the semantic structures, we show a principled way of generating questions without ad hoc manipulation of the syntactic structures. Based on the (partial) understanding of the sentence meaning, the system generates questions which are semantically grounded and purposeful. And with the support of deep linguistic grammars, the grammaticality of the generation results is warranted. Further, with a specialized ranking model, the linguistic realizations from the general-purpose generation model are further refined for the question generation task. The evaluation results from QGSTEC2010 show promising prospects of the proposed approach.
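To convey the flavour of semantic rewriting, the sketch below rewrites a toy meaning representation and "realizes" it as text; actual MRS structures, and the grammar-based parser/generator that map between them and sentences, are far richer than this stand-in:

```python
from dataclasses import dataclass

# A toy stand-in for an MRS: a flat bag of elementary predications.
# Real MRS adds handles, scopal constraints, and full argument
# structure, and the conversion uses a broad-coverage HPSG grammar.
@dataclass(frozen=True)
class EP:
    pred: str    # predicate symbol, e.g. "_bark_v"
    args: tuple  # variable names, e.g. ("e1", "x1")

# Meaning of "The dog barked.": _dog_n(x1), _bark_v(e1, x1)
mrs = [EP("_dog_n", ("x1",)), EP("_bark_v", ("e1", "x1"))]

def rewrite_to_wh(mrs, target):
    """Semantic rewriting: swap the predication restricting `target`
    for a wh-bound 'thing', leaving the rest of the structure intact."""
    return [EP("which_q+thing", (target,)) if ep.args == (target,) else ep
            for ep in mrs]

def realize(mrs):
    """Toy surface realizer standing in for the grammar-based generator."""
    verb = next(ep for ep in mrs if ep.pred.endswith("_v"))
    stem = verb.pred[1:-2]                     # "_bark_v" -> "bark"
    if any(ep.pred.startswith("which_q") for ep in mrs):
        return f"What {stem}ed?"
    noun = next(ep for ep in mrs if ep.pred.endswith("_n"))
    return f"The {noun.pred[1:-2]} {stem}ed."

print(realize(mrs))                        # The dog barked.
print(realize(rewrite_to_wh(mrs, "x1")))   # What barked?
```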
Varieties of Question Generation: Introduction to this Special Issue
Paul Piwek | Kristy Elizabeth Boyer
Concept Type Prediction and Responsive Adaptation in a Dialogue System
Svetlana Stoyanchev | Amanda J. Stent
Responsive adaptation in spoken dialog systems involves a change in dialog system behavior in response to a user or a dialog situation. In this paper we address responsive adaptation in the automatic speech recognition (ASR) module of a spoken dialog system. We hypothesize that information about the content of a user utterance may help improve speech recognition for the utterance. We use a two-step process to test this hypothesis: first, we automatically predict the task-relevant concept types likely to be present in a user utterance using features from the dialog context and from the output of first-pass ASR of the utterance; and then, we adapt the ASR’s language model to the predicted content of the user’s utterance and run a second pass of ASR. We show that: (1) it is possible to achieve high accuracy in determining presence or absence of particular concept types in a post-confirmation utterance; and (2) 2-pass speech recognition with concept type classification and language model adaptation can lead to improved speech recognition performance for post-confirmation utterances.
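The two-pass scheme can be sketched as follows, with a keyword rule in place of the trained concept-type classifier and a toy unigram interpolation rescoring an invented n-best list in place of full language-model adaptation and re-recognition; all words and probabilities are made up:

```python
# Toy unigram "LMs"; a real system adapts a full n-gram LM.
GENERAL_LM = {"fly": 0.02, "to": 0.08, "awesome": 0.02, "austin": 0.001}
CITY_LM    = {"austin": 0.12, "boston": 0.12}   # concept-specific LM

def predict_concept_types(first_pass_hyp, dialog_state):
    """Step 1: predict the concept types present in the utterance from
    the first-pass hypothesis and the dialogue context (a keyword rule
    here, a trained classifier in the paper)."""
    types = set()
    if dialog_state == "confirm-city" or "to" in first_pass_hyp.split():
        types.add("CITY")
    return types

def adapted_score(words, types, weight=0.5):
    """Step 2: score a hypothesis under the general LM interpolated
    with the LMs of the predicted concept types."""
    score = 1.0
    for w in words:
        p = GENERAL_LM.get(w, 1e-4)
        if "CITY" in types:
            p = (1 - weight) * p + weight * CITY_LM.get(w, 1e-4)
        score *= p
    return score

nbest = [["fly", "to", "awesome"], ["fly", "to", "austin"]]
types = predict_concept_types("fly to awesome", dialog_state="confirm-city")
print(max(nbest, key=lambda h: adapted_score(h, set())))   # baseline picks 'awesome'
print(max(nbest, key=lambda h: adapted_score(h, types)))   # adapted picks 'austin'
```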