Michimasa Inaba


2019

Proceedings of the 1st International Workshop of AI Werewolf and Dialog System (AIWolfDial2019)
Yoshinobu Kano | Claus Aranha | Michimasa Inaba | Fujio Toriumi | Hirotaka Osawa | Daisuke Katagami | Takashi Otsuki

Overview of AIWolfDial 2019 Shared Task: Contest of Automatic Dialog Agents to Play the Werewolf Game through Conversations
Yoshinobu Kano | Claus Aranha | Michimasa Inaba | Fujio Toriumi | Hirotaka Osawa | Daisuke Katagami | Takashi Otsuki | Issei Tsunoda | Shoji Nagayama | Dolça Tellols | Yu Sugawara | Yohei Nakata
Proceedings of the 1st International Workshop of AI Werewolf and Dialog System (AIWolfDial2019)

2018

Estimating User Interest from Open-Domain Dialogue
Michimasa Inaba | Kenichi Takahashi
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue

Dialogue personalization is an important issue in the field of open-domain chat-oriented dialogue systems. If these systems could consider their users’ interests, user engagement and satisfaction would be greatly improved. This paper proposes a neural network-based method for estimating users’ interests from their utterances in chat dialogues in order to personalize dialogue systems’ responses. We introduce a method for effectively extracting topics and user interests from utterances, and we also propose a pre-training approach that increases learning efficiency. Our experimental results indicate that the proposed model estimates users’ interests more accurately than baseline approaches.

2016

Neural Utterance Ranking Model for Conversational Dialogue Systems
Michimasa Inaba | Kenichi Takahashi
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

The dialogue breakdown detection challenge: Task description, datasets, and evaluation metrics
Ryuichiro Higashinaka | Kotaro Funakoshi | Yuka Kobayashi | Michimasa Inaba
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Dialogue breakdown detection is a promising technique in dialogue systems. To promote the research and development of such a technique, we organized a dialogue breakdown detection challenge in which the task is to detect a system’s inappropriate utterances that lead to dialogue breakdowns in chat. This paper describes the design, datasets, and evaluation metrics for the challenge, as well as the methods and results of the participants’ submitted runs.