Steven Wu


2025

Persona-Augmented Benchmarking: Evaluating LLMs Across Diverse Writing Styles
Kimberly Truong | Riccardo Fogliato | Hoda Heidari | Steven Wu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Current benchmarks for evaluating Large Language Models (LLMs) often do not exhibit enough writing style diversity, with many adhering primarily to standardized conventions. Such benchmarks do not fully capture the rich variety of communication patterns exhibited by humans. Thus, LLMs optimized on these benchmarks may demonstrate brittle performance when faced with “non-standard” input. In this work, we test this hypothesis by rewriting evaluation prompts using persona-based LLM prompting, a low-cost method to emulate diverse writing styles. Our results show that, even with identical semantic content, variations in writing style and prompt formatting significantly impact the estimated performance of the LLM under evaluation. Notably, we identify distinct writing styles that consistently trigger either low or high performance across a range of models and tasks, irrespective of model family, size, or recency. Our work offers a scalable approach to augment existing benchmarks, improving the external validity of the assessments they provide for LLM performance across linguistic variations.
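A minimal sketch of the persona-augmented evaluation loop described in the abstract, assuming a generic chat-completion client. The `call_llm` helper, the rewrite prompt template, and the `grade` callback are illustrative placeholders, not the paper's actual implementation.

```python
# Hypothetical sketch: rewrite each benchmark prompt in several personas' writing
# styles, then score the model under evaluation on every stylistic variant.
from typing import Callable

REWRITE_TEMPLATE = (
    "Rewrite the question below in the writing style of {persona}. "
    "Preserve the exact meaning and all information needed to answer it. "
    "Return only the rewritten question.\n\nQuestion: {question}"
)

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM client of choice and return its reply."""
    raise NotImplementedError

def persona_augment(question: str, personas: list[str]) -> dict[str, str]:
    """Produce one style-rewritten variant of `question` per persona."""
    return {
        p: call_llm(REWRITE_TEMPLATE.format(persona=p, question=question))
        for p in personas
    }

def accuracy_by_persona(
    benchmark: list[tuple[str, str]],       # (question, gold answer) pairs
    personas: list[str],
    grade: Callable[[str, str], bool],      # checks a model answer against the gold answer
) -> dict[str, float]:
    """Estimate the evaluated model's accuracy separately for each persona's style."""
    correct = {p: 0 for p in personas}
    for question, gold in benchmark:
        for p, variant in persona_augment(question, personas).items():
            if grade(call_llm(variant), gold):
                correct[p] += 1
    return {p: correct[p] / len(benchmark) for p in personas}
```

Comparing the per-persona accuracies against accuracy on the original prompts surfaces the style-dependent performance gaps the paper studies.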

Predicting Language Models’ Success at Zero-Shot Probabilistic Prediction
Kevin Ren | Santiago Cortes-Gomez | Carlos Miguel Patiño | Ananya Joshi | Ruiqi Lyu | Jingjing Tang | Alistair Turcan | Khurram Yamin | Steven Wu | Bryan Wilder
Findings of the Association for Computational Linguistics: EMNLP 2025

Recent work has investigated the capabilities of large language models (LLMs) as zero-shot models for generating individual-level characteristics (e.g., to serve as risk models or augment survey datasets). However, when should a user have confidence that an LLM will provide high-quality predictions for their particular task? To address this question, we conduct a large-scale empirical study of LLMs’ zero-shot predictive capabilities across a wide range of tabular prediction tasks. We find that LLMs’ performance is highly variable, both on tasks within the same dataset and across different datasets. However, when the LLM performs well on the base prediction task, its predicted probabilities become a stronger signal for individual-level accuracy. Then, we construct metrics to predict LLMs’ performance at the task level, aiming to distinguish between tasks where LLMs may perform well and where they are likely unsuitable. We find that some of these metrics, each of which is assessed without labeled data, yield strong signals of LLMs’ predictive performance on new tasks.
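A minimal sketch of zero-shot probabilistic prediction on tabular rows, followed by one label-free task-level signal. The prompt serialization, the `call_llm` helper, and the mean-confidence metric are assumptions for illustration; the paper's actual prompts and metrics may differ.

```python
# Hypothetical sketch: elicit per-row probabilities from an LLM for a tabular task,
# then compute a task-level confidence signal that needs no labels.
def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM client of choice and return its reply."""
    raise NotImplementedError

def row_to_prompt(row: dict[str, object], target: str) -> str:
    """Serialize one tabular record into a question asking for P(target is true)."""
    features = "; ".join(f"{k} = {v}" for k, v in row.items())
    return (
        f"Given the record: {features}. "
        f"What is the probability that {target} is true? "
        "Answer with a single number between 0 and 1."
    )

def predict_proba(rows: list[dict[str, object]], target: str) -> list[float]:
    """Zero-shot probability estimates, clipped to [0, 1]."""
    probs = []
    for row in rows:
        reply = call_llm(row_to_prompt(row, target))
        try:
            probs.append(min(max(float(reply.strip()), 0.0), 1.0))
        except ValueError:
            probs.append(0.5)  # uninformative fallback when the reply is not a number
    return probs

def mean_confidence(probs: list[float]) -> float:
    """Label-free task-level signal: average scaled distance from the 0.5 boundary."""
    return 2 * sum(abs(p - 0.5) for p in probs) / len(probs)
```

A low `mean_confidence` is one plausible warning sign that the task may be unsuitable for zero-shot LLM prediction, in the spirit of the unlabeled task-level metrics the abstract describes.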