Do Massively Pretrained Language Models Make Better Storytellers?

Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, Christopher D. Manning


Abstract
Large neural language models trained on massive amounts of text have emerged as a formidable strategy for Natural Language Understanding tasks. However, the strength of these models as Natural Language Generators is less clear. Though anecdotal evidence suggests that these models generate better quality text, there has been no detailed study characterizing their generation abilities. In this work, we compare the performance of an extensively pretrained model, OpenAI GPT2-117 (Radford et al., 2019), to a state-of-the-art neural story generation model (Fan et al., 2018). By evaluating the generated text across a wide variety of automatic metrics, we characterize the ways in which pretrained models do, and do not, make better storytellers. We find that although GPT2-117 conditions more strongly on context, is more sensitive to ordering of events, and uses more unusual words, it is just as likely to produce repetitive and under-diverse text when using likelihood-maximizing decoding algorithms.
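The paper's own evaluation code is in the linked abisee/story-generation-eval repository; as a minimal illustrative sketch (not the paper's implementation), the snippet below shows likelihood-maximizing (greedy) decoding with the 117M-parameter GPT-2 checkpoint via the Hugging Face transformers API, plus a distinct-n ratio as one simple proxy for the kind of repetition and under-diversity the abstract describes. The prompt string and the distinct_n helper are hypothetical examples, and distinct-n is only assumed here to be representative of the automatic diversity metrics used.

```python
# Illustrative sketch, not the paper's code: greedy decoding with GPT2-117
# (the smallest "gpt2" checkpoint) and a distinct-n diversity measure.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The knight opened the ancient door and"  # hypothetical story prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Likelihood-maximizing decoding: always pick the most probable next token.
output_ids = model.generate(input_ids, max_length=100, do_sample=False)
story = tokenizer.decode(output_ids[0], skip_special_tokens=True)

def distinct_n(text, n=2):
    """Fraction of unique n-grams; low values indicate repetitive text."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)

print(story)
print("distinct-2:", round(distinct_n(story, 2), 3))
```

With greedy decoding, outputs like this tend to loop and score low on distinct-n, whereas sampling-based decoding typically raises the score; this contrast mirrors the abstract's point that under likelihood-maximizing decoding GPT2-117 is just as prone to repetitive, under-diverse text.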
Anthology ID: K19-1079
Volume: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month: November
Year: 2019
Address: Hong Kong, China
Editors: Mohit Bansal, Aline Villavicencio
Venue: CoNLL
SIG: SIGNLL
Publisher: Association for Computational Linguistics
Pages: 843–861
URL: https://aclanthology.org/K19-1079
DOI: 10.18653/v1/K19-1079
Cite (ACL): Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D. Manning. 2019. Do Massively Pretrained Language Models Make Better Storytellers? In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 843–861, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal): Do Massively Pretrained Language Models Make Better Storytellers? (See et al., CoNLL 2019)
PDF: https://preview.aclanthology.org/nschneid-patch-4/K19-1079.pdf
Code: abisee/story-generation-eval
Data: WebText, WritingPrompts