Pierre-Emmanuel Mazare
Also published as: Pierre-Emmanuel Mazaré
2019
Learning from Dialogue after Deployment: Feed Yourself, Chatbot!
Braden Hancock | Antoine Bordes | Pierre-Emmanuel Mazare | Jason Weston
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
The majority of conversations a dialogue agent sees over its lifetime occur after it has already been trained and deployed, leaving a vast store of potential training signal untapped. In this work, we propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in. As our agent engages in conversation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the user’s responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbot’s dialogue abilities further. On the PersonaChat chit-chat dataset with over 131k training examples, we find that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.
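A minimal sketch of the self-feeding loop the abstract describes, assuming hypothetical helpers (satisfaction_score, ask_for_feedback) and an assumed threshold value; it is illustrative only, not the authors' implementation.

```python
# Sketch of turning live conversations into new training data, as described
# in the abstract above. All names and the threshold are assumptions.

SATISFACTION_THRESHOLD = 0.7  # assumed cut-off; the real system tunes this

dialogue_examples = []   # (context, user_response) pairs to imitate
feedback_examples = []   # (context, feedback_text) pairs for the feedback task

def self_feed(agent, context, user_response):
    """Extract a new training example from one live exchange."""
    score = agent.satisfaction_score(context)  # estimated user satisfaction
    if score >= SATISFACTION_THRESHOLD:
        # Conversation appears to be going well: imitate the human reply.
        dialogue_examples.append((context, user_response))
    else:
        # Likely mistake: request explicit feedback and learn to predict it.
        feedback = agent.ask_for_feedback(context)
        feedback_examples.append((context, feedback))
```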
2018
Training Millions of Personalized Dialogue Agents
Pierre-Emmanuel Mazaré | Samuel Humeau | Martin Raison | Antoine Bordes
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Current dialogue systems fail at being engaging for users, especially when trained end-to-end without relying on proactive reengaging scripted strategies. Zhang et al. (2018) showed that the engagement level of end-to-end dialogue models increases when conditioning them on text personas providing some personalized back-story to the model. However, the dataset used in Zhang et al. (2018) is synthetic and only contains around 1k different personas. In this paper we introduce a new dataset providing 5 million personas and 700 million persona-based dialogues. Our experiments show that, at this scale, training using personas still improves the performance of end-to-end systems. In addition, we show that other tasks benefit from the wide coverage of our dataset by fine-tuning our model on the data from Zhang et al. (2018) and achieving state-of-the-art results.
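An illustrative sketch of persona-conditioned response selection in the spirit of the setup above, assuming a generic encode function and dot-product scoring; the paper's actual architecture and training differ.

```python
# Rank candidate replies given a text persona plus dialogue context.
# encode() and the scoring rule are placeholders, not the paper's model.

from typing import Callable, List, Sequence

def score_candidates(encode: Callable[[str], Sequence[float]],
                     persona: List[str], context: str,
                     candidates: List[str]) -> List[float]:
    # Prepend the persona sentences to the dialogue history before encoding.
    conditioned_context = " ".join(persona) + " " + context
    ctx_vec = encode(conditioned_context)
    # Dot-product score between the context encoding and each candidate.
    return [sum(c * r for c, r in zip(ctx_vec, encode(cand)))
            for cand in candidates]
```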
Reference-less Quality Estimation of Text Simplification Systems
Louis Martin | Samuel Humeau | Pierre-Emmanuel Mazaré | Éric de La Clergerie | Antoine Bordes | Benoît Sagot
Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA)
Co-authors
- Antoine Bordes 3
- Samuel Humeau 2
- Martin Raison 1
- Louis Martin 1
- Éric Villemonte De La Clergerie 1