Gonçalo Mordido

Also published as: Goncalo Mordido


2020

Best Student Forcing: A Simple Training Mechanism in Adversarial Language Generation
Jonathan Sauder | Ting Hu | Xiaoyin Che | Goncalo Mordido | Haojin Yang | Christoph Meinel
Proceedings of the Twelfth Language Resources and Evaluation Conference

Language models trained with Maximum Likelihood Estimation (MLE) have long been the mainstream solution in Natural Language Generation (NLG). Recently, various approaches based on Generative Adversarial Nets (GANs) have also been proposed. While they offer exciting new prospects, GANs for NLG have so far reportedly suffered from training instability and mode collapse, and are therefore outperformed by conventional MLE models. In this work, we propose techniques for improving GANs in NLG, namely Best Student Forcing (BSF), a novel yet simple adversarial training mechanism in which generated sequences of high quality are selected as temporary ground truth to further train the generator. We also use an ensemble of discriminators to increase training stability and sample diversity. Evaluation shows that the combination of BSF and multiple discriminators consistently performs better than previous GAN approaches over various metrics, and outperforms an MLE baseline in terms of Fréchet Distance, a recently proposed metric that captures both sample quality and diversity.
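A minimal sketch of the BSF idea as described in the abstract: sample sequences from the generator, score them with an ensemble of discriminators, and teacher-force the generator on its own top-scoring samples as temporary ground truth. The interfaces (generator.sample, generator.nll, discriminator call signatures) are illustrative assumptions, not the paper's actual code.

```python
import torch

def bsf_step(generator, discriminators, optimizer, batch_size=64, top_k=16):
    """One Best Student Forcing update (illustrative sketch).

    Assumes: generator.sample(B) -> (B, T) token ids,
             generator.nll(seqs) -> scalar negative log-likelihood,
             each discriminator maps (B, T) token ids -> (B,) realism scores.
    """
    with torch.no_grad():
        # Sample candidate sequences from the current generator.
        sequences = generator.sample(batch_size)

        # Average realism scores over the discriminator ensemble.
        scores = torch.stack([d(sequences) for d in discriminators]).mean(dim=0)

        # Keep the highest-scoring "best student" sequences.
        best = sequences[scores.topk(top_k).indices]

    # Treat the best samples as temporary ground truth and
    # maximize their likelihood, as in a standard MLE update.
    optimizer.zero_grad()
    loss = generator.nll(best)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Averaging over multiple discriminators, per the abstract, smooths the selection signal compared to a single discriminator, which is one plausible reason for the reported gains in stability and diversity.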

Mark-Evaluate: Assessing Language Generation using Population Estimation Methods
Gonçalo Mordido | Christoph Meinel
Proceedings of the 28th International Conference on Computational Linguistics

We propose a family of metrics to assess language generation derived from population estimation methods widely used in ecology. More specifically, we use mark-recapture and maximum-likelihood methods that have been applied over the past several decades to estimate the size of closed populations in the wild. We propose three novel metrics: ME-Petersen and ME-CAPTURE, which retrieve a single-valued assessment, and ME-Schnabel, which returns a double-valued metric to assess the evaluation set in terms of quality and diversity separately. In synthetic experiments, our family of methods is sensitive to drops in quality and diversity. Moreover, our methods show a higher correlation to human evaluation than existing metrics on several challenging tasks, namely unconditional language generation, machine translation, and text summarization.
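For intuition, the classic Lincoln-Petersen estimator underlying the mark-recapture approach is sketched below. This is a toy illustration of the ecology method the abstract builds on, not the paper's exact ME-Petersen computation; treating set intersection as "recapture" is an assumption made here for simplicity (the paper matches generated and reference samples rather than literal duplicates).

```python
def petersen_estimate(first_capture: set, second_capture: set) -> float:
    """Lincoln-Petersen population-size estimate.

    first_capture:  items "marked" in the first sample.
    second_capture: items drawn in the second sample; any item also
                    present in first_capture counts as a recapture.
    """
    n1 = len(first_capture)                  # marked on first capture
    n2 = len(second_capture)                 # size of second capture
    m = len(first_capture & second_capture)  # recaptured (marked) items
    if m == 0:
        raise ValueError("no recaptures: estimate is undefined")
    # N ~= n1 * n2 / m: assumes the marked fraction in the second
    # sample mirrors the marked fraction in the whole population.
    return n1 * n2 / m

# Toy usage: 2 of 5 second-capture items were previously marked,
# so the estimated population size is 5 * 5 / 2 = 12.5.
marked = {"a", "b", "c", "d", "e"}
recaptured = {"d", "e", "f", "g", "h"}
print(petersen_estimate(marked, recaptured))  # 12.5
```

In the language-generation setting, the estimated "population size" recovered from how often model samples are matched against reference samples is what the metrics turn into an assessment of quality and diversity.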