Benchmarking Linguistic Diversity of Large Language Models

Yanzhu Guo, Guokan Shang, Chloé Clavel


Abstract
The development and evaluation of Large Language Models (LLMs) have primarily focused on their task-solving capabilities, with recent models even surpassing human performance in some areas. However, this focus often neglects whether machine-generated language matches the human level of diversity in terms of vocabulary choice, syntactic construction, and expression of meaning, raising questions about whether the fundamentals of language generation have been fully addressed. This paper emphasizes the importance of examining how well language models preserve human linguistic richness, given the concerning surge in online content produced or aided by LLMs. We adapt a comprehensive framework for evaluating LLMs from various linguistic diversity perspectives, including lexical, syntactic, and semantic dimensions. Using this framework, we benchmark several state-of-the-art LLMs across all diversity dimensions and conduct an in-depth analysis of syntactic diversity. Finally, we analyze how the design, development, and deployment choices of LLMs impact the linguistic diversity of their outputs, focusing on the creative task of story generation.
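As a rough illustration of what a lexical-diversity measurement within such a framework might look like, the sketch below computes distinct-n, a common proxy for vocabulary richness. The page does not list the paper's actual metrics, so this measure and the toy data are assumed stand-ins rather than the authors' method.

def distinct_n(texts, n=2):
    """Fraction of unique n-grams across a set of generated texts.

    A common lexical-diversity proxy (distinct-n); not necessarily the
    metric used in the paper -- shown here only as an illustration.
    """
    ngrams = []
    for text in texts:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

# Hypothetical example: two small sets of model-generated stories.
repetitive = ["the cat sat on the mat", "the dog sat on the mat"]
varied = ["a fox darted through moonlit birches", "rain hammered the tin roof all night"]
print(distinct_n(repetitive), distinct_n(varied))  # the repetitive set scores lower

Higher distinct-n indicates that generated texts reuse fewer n-grams, i.e., a lexically more diverse output distribution.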
Anthology ID: 2025.tacl-1.69
Volume: Transactions of the Association for Computational Linguistics, Volume 13
Year: 2025
Address: Cambridge, MA
Venue: TACL
Publisher: MIT Press
Pages: 1507–1526
URL: https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.69/
DOI: 10.1162/tacl.a.47
Cite (ACL): Yanzhu Guo, Guokan Shang, and Chloé Clavel. 2025. Benchmarking Linguistic Diversity of Large Language Models. Transactions of the Association for Computational Linguistics, 13:1507–1526.
Cite (Informal): Benchmarking Linguistic Diversity of Large Language Models (Guo et al., TACL 2025)
PDF: https://preview.aclanthology.org/ingest-eacl/2025.tacl-1.69.pdf