The Next Chapter: A Study of Large Language Models in Storytelling

Zhuohan Xie, Trevor Cohn, Jey Han Lau


Abstract
To enhance the quality of generated stories, recent work on story generation has explored the use of higher-level attributes such as plots or commonsense knowledge. Prompt-based learning with large language models (LLMs) such as GPT-3 has shown strong performance across diverse natural language processing (NLP) tasks. This paper compares the storytelling ability of LLMs against recent story generation models, using both automatic and human evaluation on three datasets that vary in style, register, and story length. The results show that LLMs generate stories of significantly higher quality than the other story generation models and perform at a level competitive with human authors, although a preliminary observation is that they tend to reproduce real stories in settings involving world knowledge, resembling a form of plagiarism.
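The prompt-based setup the abstract refers to can be illustrated with a minimal sketch, assuming the legacy openai Python SDK (pre-1.0) and a GPT-3-family completions model; the premise, prompt template, and decoding parameters below are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch: prompting GPT-3 to write a short story from a premise.
# Assumes the legacy openai SDK (<1.0); model name, prompt wording, and
# decoding parameters are assumptions, not taken from the paper.
import openai

openai.api_key = "YOUR_API_KEY"  # assumes an OpenAI API key is available

premise = "A lighthouse keeper finds a message in a bottle."
prompt = (
    "Write a short story based on the following premise.\n"
    f"Premise: {premise}\n"
    "Story:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model; assumed for illustration
    prompt=prompt,
    max_tokens=300,            # cap the length of the generated story
    temperature=0.9,           # sample with some randomness for varied stories
)

story = response["choices"][0]["text"].strip()
print(story)

In a study like this one, generations produced this way would then be scored with automatic metrics and judged by human annotators against stories from other models and from human authors.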
Anthology ID:
2023.inlg-main.23
Volume:
Proceedings of the 16th International Natural Language Generation Conference
Month:
September
Year:
2023
Address:
Prague, Czechia
Editors:
C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues:
INLG | SIGDIAL
SIG:
SIGGEN
Publisher:
Association for Computational Linguistics
Pages:
323–351
URL:
https://aclanthology.org/2023.inlg-main.23
DOI:
10.18653/v1/2023.inlg-main.23
Cite (ACL):
Zhuohan Xie, Trevor Cohn, and Jey Han Lau. 2023. The Next Chapter: A Study of Large Language Models in Storytelling. In Proceedings of the 16th International Natural Language Generation Conference, pages 323–351, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal):
The Next Chapter: A Study of Large Language Models in Storytelling (Xie et al., INLG-SIGDIAL 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2023.inlg-main.23.pdf
Supplementary attachment:
2023.inlg-main.23.Supplementary_Attachment.pdf