Zhuohan Xie


2021

Exploring Story Generation with Multi-task Objectives in Variational Autoencoders
Zhuohan Xie | Jey Han Lau | Trevor Cohn
Proceedings of the 19th Annual Workshop of the Australasian Language Technology Association

GPT-2 has been frequently adopted in story generation models because of its powerful generative capability. However, it still fails to generate consistent stories and lacks diversity. Current story generation models incorporate additional information, such as plots or commonsense knowledge, into GPT-2 to guide the generation process. These approaches focus on improving the quality of generated stories, whereas our work looks at both quality and diversity. We explore combining BERT and GPT-2 to build a variational autoencoder (VAE), and extend it with additional objectives to learn global features such as story topic and discourse relations. Our evaluations show that our enhanced VAE provides a better quality-diversity trade-off, generates less repetitive story content, and learns a more informative latent variable.

2019

From Shakespeare to Li-Bai: Adapting a Sonnet Model to Chinese Poetry
Zhuohan Xie | Jey Han Lau | Trevor Cohn
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association

In this paper, we adapt Deep-speare, a joint neural network model for English sonnets, to Chinese poetry. We describe the characteristics of the Chinese quatrain and explain our architecture as well as the training and generation procedure, which differs from that for Shakespearean sonnets in several respects. We analyse the generated poetry and find that the model works well for Chinese poetry, as it can: (1) generate coherent 4-line quatrains on different topics; and (2) capture rhyme automatically (to a certain extent).