Adversarial evaluation for open-domain dialogue generation

Elia Bruni, Raquel Fernández


Abstract
We investigate the potential of adversarial evaluation methods for open-domain dialogue generation systems, comparing the performance of a discriminative agent to that of humans on the same task. Our results show that the task is hard for both automated models and humans, but that a discriminative agent can learn patterns that lead to above-chance performance.
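To make the setup concrete: adversarial evaluation trains a discriminator to tell human responses apart from machine-generated ones, and the discriminator's accuracy serves as the evaluation signal. The following is a minimal illustrative sketch of that protocol only, not the paper's model; the classifier, features, separator token, and toy data are all hypothetical stand-ins.

# Sketch of adversarial evaluation as binary classification over
# (context, response) pairs. Label 1 = human-written response,
# label 0 = machine-generated response. Data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

pairs = [
    ("how was your day", "pretty good, i went hiking with my sister", 1),
    ("how was your day", "i do not know what you are saying", 0),
    ("what do you do for work", "i teach high school chemistry", 1),
    ("what do you do for work", "i am a i am a student", 0),
    ("any plans for the weekend", "visiting my parents up in maine", 1),
    ("any plans for the weekend", "yes yes of course the weekend", 0),
    ("did you see the game last night", "that last-minute goal was unreal", 1),
    ("did you see the game last night", "i like game", 0),
]

# Encode each (context, response) pair as one string for the classifier.
texts = [context + " [SEP] " + response for context, response, _ in pairs]
labels = [label for _, _, label in pairs]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, stratify=labels, random_state=0
)

# Bag-of-ngrams features plus logistic regression as a stand-in
# discriminator (the paper's discriminative agent is a learned model,
# but not necessarily this one).
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

# Accuracy near 0.5 means generated responses are indistinguishable
# from human ones; above-chance accuracy means the discriminator has
# found telltale patterns in the generated text.
preds = clf.predict(vectorizer.transform(X_test))
print("discriminator accuracy:", accuracy_score(y_test, preds))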
Anthology ID: W17-5534
Volume: Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue
Month: August
Year: 2017
Address: Saarbrücken, Germany
Editors: Kristiina Jokinen, Manfred Stede, David DeVault, Annie Louis
Venue: SIGDIAL
SIG: SIGDIAL
Publisher: Association for Computational Linguistics
Pages: 284–288
URL: https://aclanthology.org/W17-5534
DOI: 10.18653/v1/W17-5534
Cite (ACL):
Elia Bruni and Raquel Fernández. 2017. Adversarial evaluation for open-domain dialogue generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 284–288, Saarbrücken, Germany. Association for Computational Linguistics.
Cite (Informal):
Adversarial evaluation for open-domain dialogue generation (Bruni & Fernández, SIGDIAL 2017)
PDF: https://aclanthology.org/W17-5534.pdf