Xusen Yin
2021
Summary-Oriented Question Generation for Informational Queries
Xusen Yin | Li Zhou | Kevin Small | Jonathan May
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021)
Users frequently ask simple factoid questions of question answering (QA) systems, attenuating the impact of myriad recent works that support more complex questions. Prompting users with automatically generated suggested questions (SQs) can improve user understanding of QA system capabilities and thus facilitate more effective use. We aim to produce self-explanatory questions that focus on main document topics and are answerable with variable-length passages as appropriate. We satisfy these requirements by using a BERT-based Pointer-Generator Network trained on the Natural Questions (NQ) dataset. Our model achieves SOTA performance on SQ generation for the NQ dataset (20.1 BLEU-4). We further apply our model to out-of-domain news articles, evaluating with a QA system due to the lack of gold questions, and demonstrate that our model produces better SQs for news articles, with further confirmation via a human evaluation.
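As an illustrative aside (not the paper's code), the sketch below shows the copy/generate mixture at the heart of a pointer-generator network: a learned probability p_gen interpolates between the decoder's vocabulary distribution and a copy distribution formed from attention over source tokens. All tensor names and shapes here are assumptions for illustration.

```python
# Minimal sketch of the pointer-generator copy/generate mixture (illustrative,
# not the paper's implementation): the final word distribution blends the
# vocabulary distribution with attention over source tokens, weighted by p_gen.
import torch

def pointer_generator_distribution(p_vocab, attention, src_ids, p_gen):
    """Combine vocabulary and copy distributions.

    p_vocab:   (batch, vocab_size) softmax over the output vocabulary
    attention: (batch, src_len)    softmax over source positions
    src_ids:   (batch, src_len)    vocabulary ids of the source tokens
    p_gen:     (batch, 1)          probability of generating vs. copying
    """
    # The "generate" part of the mixture.
    final = p_gen * p_vocab
    # Scatter the "copy" part onto the vocabulary ids of the source tokens.
    final = final.scatter_add(1, src_ids, (1.0 - p_gen) * attention)
    return final

if __name__ == "__main__":
    batch, vocab_size, src_len = 2, 10, 4
    p_vocab = torch.softmax(torch.randn(batch, vocab_size), dim=-1)
    attention = torch.softmax(torch.randn(batch, src_len), dim=-1)
    src_ids = torch.randint(0, vocab_size, (batch, src_len))
    p_gen = torch.sigmoid(torch.randn(batch, 1))
    dist = pointer_generator_distribution(p_vocab, attention, src_ids, p_gen)
    print(dist.sum(dim=-1))  # each row sums to 1
```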
2020
Learning to Generalize for Sequential Decision Making
Xusen Yin | Ralph Weischedel | Jonathan May
Findings of the Association for Computational Linguistics: EMNLP 2020
We consider problems of making sequences of decisions to accomplish tasks, interacting via the medium of language. These problems are often tackled with reinforcement learning approaches. We find that these models do not generalize well when applied to novel task domains. However, the large amount of computation necessary to adequately train and explore the search space of sequential decision making under a reinforcement learning paradigm precludes the inclusion of large contextualized language models, which might otherwise enable the desired generalization ability. We introduce a teacher-student imitation learning methodology and a means of converting a reinforcement learning model into a natural language understanding model. Together, these methodologies enable the introduction of contextualized language models into the sequential decision making problem space. We show that, leveraging both the imitation learning and the reformulation, models learn faster and generalize better. Our models exceed teacher performance on various held-out decision problems, by up to 7% on in-domain problems and 24% on out-of-domain problems.
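As an illustrative aside (not the paper's implementation), the sketch below shows the supervised core of teacher-student imitation learning: rather than exploring with reinforcement learning, the student network is fit with plain cross-entropy to the actions a teacher policy chose. The data and the small network architecture are synthetic assumptions.

```python
# Minimal behaviour-cloning sketch (illustrative, not the paper's code):
# the student is trained with supervised cross-entropy on teacher-chosen
# actions instead of learning by reinforcement.
import torch
import torch.nn as nn

# Toy "teacher demonstrations": observation vectors paired with the action
# id the teacher chose in that state (synthetic data for illustration).
observations = torch.randn(32, 16)              # 32 states, 16 features each
teacher_actions = torch.randint(0, 4, (32,))    # 4 possible actions

student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    logits = student(observations)              # (32, 4) action scores
    loss = loss_fn(logits, teacher_actions)     # imitate the teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: imitation loss = {loss.item():.3f}")
```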
2017
SHIHbot: A Facebook chatbot for Sexual Health Information on HIV/AIDS
Jacqueline Brixey | Rens Hoegen | Wei Lan | Joshua Rusow | Karan Singla | Xusen Yin | Ron Artstein | Anton Leuski
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
We present the implementation of an autonomous chatbot, SHIHbot, deployed on Facebook, which answers a wide variety of sexual health questions on HIV/AIDS. The chatbot’s response database is compiled from professional medical and public health resources in order to provide reliable information to users. The system’s backend is NPCEditor, a response selection platform trained on linked questions and answers; to our knowledge this is the first retrieval-based chatbot deployed on a large public social network.
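As an illustrative aside (a deliberately simplified assumption rather than NPCEditor's actual cross-language retrieval model), retrieval-based response selection can be sketched as matching the user's message against the stored, linked questions and returning the answer tied to the best match. The QA pairs below are made up for illustration.

```python
# Minimal sketch of retrieval-based response selection (illustrative only;
# NPCEditor uses a more sophisticated relevance model): match the user's
# message to stored questions by TF-IDF cosine similarity and return the
# answer linked to the best-matching question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny stand-in for the linked question-answer database (synthetic examples).
qa_pairs = [
    ("How is HIV transmitted?", "HIV is transmitted through certain body fluids ..."),
    ("Can HIV be cured?", "There is no cure, but treatment can control the virus ..."),
    ("How can I get tested?", "You can get tested at a clinic or with a home test kit ..."),
]

questions = [q for q, _ in qa_pairs]
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def select_response(user_message: str) -> str:
    """Return the answer linked to the most similar stored question."""
    scores = cosine_similarity(vectorizer.transform([user_message]), question_vectors)
    return qa_pairs[scores.argmax()][1]

print(select_response("how do people catch HIV"))
```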