Testing Language Creativity of Large Language Models and Humans

Anca Dinu, Andra-Maria Florescu


Abstract
Since the advent of Large Language Models (LLMs), the interest in and need for a better understanding of artificial creativity have increased. This paper aims to design and administer an integrated language creativity test, including multiple tasks and criteria, targeting both LLMs and humans, for a direct comparison. Language creativity refers to how one uses natural language in novel and unusual ways, by bending lexico-grammatical and semantic norms through literary devices or by coining new words. The results show a slightly better performance of LLMs compared to humans. We analyzed the dataset of responses with computational methods such as sentiment analysis, clustering, and binary classification, for a more in-depth understanding. We also manually inspected a subset of the answers, which revealed that the LLMs mastered figurative speech, while humans responded more pragmatically.
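The abstract mentions clustering and binary classification of the response dataset; a minimal sketch of what such an analysis could look like is shown below. The paper does not specify its pipeline, so the file name ("responses.csv"), its columns ("text", "source"), and the choice of TF-IDF features with KMeans and logistic regression are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical sketch: clustering and binary classification of
# creativity-test responses. Data layout is assumed, not from the paper.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed format: one response per row, "source" is "llm" or "human".
df = pd.read_csv("responses.csv")
X = TfidfVectorizer(max_features=5000).fit_transform(df["text"])

# Unsupervised view: do LLM and human answers separate into clusters?
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised view: can a classifier tell LLM responses from human ones?
y = (df["source"] == "llm").astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

High held-out accuracy in such a setup would indicate that LLM and human responses are stylistically separable, which is consistent with the manual observation that the two groups differ in figurative versus pragmatic language use.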
Anthology ID:
2025.nlp4dh-1.37
Volume:
Proceedings of the 5th International Conference on Natural Language Processing for Digital Humanities
Month:
May
Year:
2025
Address:
Albuquerque, USA
Editors:
Mika Hämäläinen, Emily Öhman, Yuri Bizzoni, So Miyagawa, Khalid Alnajjar
Venues:
NLP4DH | WS
Publisher:
Association for Computational Linguistics
Pages:
426–436
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.nlp4dh-1.37/
Cite (ACL):
Anca Dinu and Andra-Maria Florescu. 2025. Testing Language Creativity of Large Language Models and Humans. In Proceedings of the 5th International Conference on Natural Language Processing for Digital Humanities, pages 426–436, Albuquerque, USA. Association for Computational Linguistics.
Cite (Informal):
Testing Language Creativity of Large Language Models and Humans (Dinu & Florescu, NLP4DH 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.nlp4dh-1.37.pdf