A Hybrid Model for Globally Coherent Story Generation
Fangzhou Zhai, Vera Demberg, Pavel Shkadzko, Wei Shi, Asad Sayeed
Abstract
Automatically generating globally coherent stories is a challenging problem. Neural text generation models have been shown to perform well at generating fluent sentences from data, but they usually fail to keep track of the overall coherence of the story after a couple of sentences. Existing work that incorporates a text planning module has succeeded in generating recipes and dialogues, but appears to be quite data-demanding. We propose a novel story generation approach that generates globally coherent stories from a fairly small corpus. The model exploits a symbolic text planning module to produce text plans, thus reducing the demand for data; a neural surface realization module then generates fluent text conditioned on the text plan. Human evaluation showed that our model outperforms various baselines by a wide margin and generates stories that are fluent as well as globally coherent.
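The abstract describes a two-stage architecture: a symbolic planner first produces a text plan (a sequence of events), and a neural surface realizer then verbalizes the story conditioned on that plan. The following is a minimal, hypothetical Python sketch of such a pipeline; the event grammar, templates, and function names (`EVENT_GRAMMAR`, `plan_story`, `realize`) are illustrative stand-ins and not the authors' implementation, whose realizer is a neural language model rather than the template lookup used here.

```python
# Conceptual sketch (not the authors' code): symbolic planning followed by
# surface realization. All names and the toy scenario are hypothetical.
import random

# A toy symbolic "script": each key is an event type, the values are the
# events that may legally follow it, so the planner can only produce
# globally coherent event sequences.
EVENT_GRAMMAR = {
    "BEGIN": ["enter_bathroom"],
    "enter_bathroom": ["turn_on_water"],
    "turn_on_water": ["check_temperature", "get_in_tub"],
    "check_temperature": ["get_in_tub"],
    "get_in_tub": ["wash", "END"],
    "wash": ["get_out", "END"],
    "get_out": ["END"],
}

def plan_story(max_events=6):
    """Symbolic planning: walk the event grammar to build a text plan."""
    plan, event = [], "BEGIN"
    while len(plan) < max_events:
        event = random.choice(EVENT_GRAMMAR[event])
        if event == "END":
            break
        plan.append(event)
    return plan

def realize(plan):
    """Surface realization stub: in the paper this step is a neural decoder
    conditioned on the plan; a template lookup stands in for it here."""
    templates = {
        "enter_bathroom": "I walked into the bathroom.",
        "turn_on_water": "I turned on the water.",
        "check_temperature": "I checked that the water was warm enough.",
        "get_in_tub": "I climbed into the tub.",
        "wash": "I washed myself from head to toe.",
        "get_out": "Afterwards, I got out and dried off.",
    }
    return " ".join(templates[event] for event in plan)

if __name__ == "__main__":
    story_plan = plan_story()
    print("Plan:", story_plan)
    print("Story:", realize(story_plan))
```

The design point the sketch illustrates is the division of labor: global coherence is enforced by the symbolic plan, so the realizer only needs enough data to learn fluent verbalizations of individual events.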
- Anthology ID: W19-3404
- Volume: Proceedings of the Second Workshop on Storytelling
- Month: August
- Year: 2019
- Address: Florence, Italy
- Editors: Francis Ferraro, Ting-Hao ‘Kenneth’ Huang, Stephanie M. Lukin, Margaret Mitchell
- Venue: Story-NLP
- Publisher: Association for Computational Linguistics
- Pages: 34–45
- URL: https://aclanthology.org/W19-3404
- DOI: 10.18653/v1/W19-3404
- Cite (ACL): Fangzhou Zhai, Vera Demberg, Pavel Shkadzko, Wei Shi, and Asad Sayeed. 2019. A Hybrid Model for Globally Coherent Story Generation. In Proceedings of the Second Workshop on Storytelling, pages 34–45, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal): A Hybrid Model for Globally Coherent Story Generation (Zhai et al., Story-NLP 2019)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/W19-3404.pdf