Trustworthiness of Children Stories Generated by Large Language Models

Prabin Bhandari, Hannah Brennan


Abstract
Large Language Models (LLMs) have shown a tremendous capacity for generating literary text. However, their effectiveness in generating children’s stories has yet to be thoroughly examined. In this study, we evaluate the trustworthiness of children’s stories generated by LLMs using various measures, and we compare and contrast our results with both old and new children’s stories to better assess their significance. Our findings suggest that LLMs still struggle to generate children’s stories at the level of quality and nuance found in actual stories.
Anthology ID: 2023.inlg-main.24
Volume: Proceedings of the 16th International Natural Language Generation Conference
Month: September
Year: 2023
Address: Prague, Czechia
Editors: C. Maria Keet, Hung-Yi Lee, Sina Zarrieß
Venues: INLG | SIGDIAL
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 352–361
URL: https://aclanthology.org/2023.inlg-main.24
DOI: 10.18653/v1/2023.inlg-main.24
Cite (ACL): Prabin Bhandari and Hannah Brennan. 2023. Trustworthiness of Children Stories Generated by Large Language Models. In Proceedings of the 16th International Natural Language Generation Conference, pages 352–361, Prague, Czechia. Association for Computational Linguistics.
Cite (Informal): Trustworthiness of Children Stories Generated by Large Language Models (Bhandari & Brennan, INLG-SIGDIAL 2023)
PDF: https://preview.aclanthology.org/nschneid-patch-3/2023.inlg-main.24.pdf