Persona-Augmented Benchmarking: Evaluating LLMs Across Diverse Writing Styles

Kimberly Truong, Riccardo Fogliato, Hoda Heidari, Steven Wu


Abstract
Current benchmarks for evaluating Large Language Models (LLMs) often exhibit limited writing-style diversity, with many adhering primarily to standardized conventions. Such benchmarks do not fully capture the rich variety of communication patterns exhibited by humans. As a result, LLMs optimized on these benchmarks may demonstrate brittle performance when faced with “non-standard” input. In this work, we test this hypothesis by rewriting evaluation prompts using persona-based LLM prompting, a low-cost method to emulate diverse writing styles. Our results show that, even with identical semantic content, variations in writing style and prompt formatting significantly impact the estimated performance of the LLM under evaluation. Notably, we identify distinct writing styles that consistently trigger either low or high performance across a range of models and tasks, irrespective of model family, size, or recency. Our work offers a scalable approach to augment existing benchmarks, improving the external validity of the assessments they provide for LLM performance across linguistic variations.
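A minimal sketch of the persona-based rewriting step the abstract describes, assuming a generic text-in, text-out LLM callable. The persona descriptions, the rewrite instruction wording, and the names `persona_rewrite`, `augment_benchmark`, and `generate` are all illustrative assumptions, not the authors' implementation.

```python
from typing import Callable

# Hypothetical personas for illustration; the paper's actual persona set may differ.
PERSONAS = [
    "a retired teacher who writes in long, formal, carefully punctuated sentences",
    "a teenager who types in lowercase with heavy abbreviations and no punctuation",
    "a non-native English speaker whose phrasing is clear but occasionally ungrammatical",
]

# Hypothetical rewrite instruction; the paper's exact prompt wording is not shown here.
REWRITE_TEMPLATE = (
    "Rewrite the following question in the writing style of {persona}. "
    "Preserve the exact meaning and every detail needed to answer it. "
    "Return only the rewritten question.\n\n"
    "Question: {question}"
)

def persona_rewrite(question: str, persona: str,
                    generate: Callable[[str], str]) -> str:
    """Rewrite one benchmark question in the given persona's style.

    `generate` is any LLM text-generation function (API call or local model).
    """
    return generate(REWRITE_TEMPLATE.format(persona=persona, question=question))

def augment_benchmark(questions: list[str],
                      generate: Callable[[str], str]) -> list[tuple[str, str]]:
    """Produce (persona, rewritten question) pairs for every benchmark item."""
    return [
        (persona, persona_rewrite(q, persona, generate))
        for q in questions
        for persona in PERSONAS
    ]
```

Scoring the model under evaluation on both the original and the rewritten questions, then comparing accuracy per persona, yields the style-conditioned performance gaps the abstract reports.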
Anthology ID: 2025.emnlp-main.1155
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 22687–22720
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1155/
Cite (ACL): Kimberly Truong, Riccardo Fogliato, Hoda Heidari, and Steven Wu. 2025. Persona-Augmented Benchmarking: Evaluating LLMs Across Diverse Writing Styles. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 22687–22720, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Persona-Augmented Benchmarking: Evaluating LLMs Across Diverse Writing Styles (Truong et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1155.pdf
Checklist: 2025.emnlp-main.1155.checklist.pdf