Distribution Prompting: Understanding the Expressivity of Language Models Through the Next-Token Distributions They Can Produce

Haojin Wang, Zining Zhu, Freda Shi


Abstract
Autoregressive neural language models (LMs) generate a probability distribution over tokens at each time step given a prompt. In this work, we attempt to systematically understand the probability distributions that LMs can produce, showing that some distributions are significantly harder to elicit than others. Specifically, for any target next-token distribution over the vocabulary, we attempt to find a prompt that induces the LM to output a distribution as close as possible to the target, using either soft or hard gradient-based prompt tuning. We find that (1) in general, distributions with very low or very high entropy are easier to approximate than those with moderate entropy; (2) among distributions with the same entropy, those containing "outlier tokens" are easier to approximate; (3) target distributions generated by LMs, even LMs with different tokenizers, are easier to approximate than randomly chosen targets. These results offer insights into the expressiveness of LMs and the challenges of using them as probability distribution proposers.
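A minimal sketch of the soft prompt tuning setup described in the abstract, assuming a KL-divergence objective against the target distribution and a GPT-2-style model loaded via Hugging Face Transformers; the model choice, prompt length, optimizer, learning rate, and loss are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM

# Placeholder model; the models used in the paper may differ.
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the LM; only the soft prompt is trained

vocab_size = model.config.vocab_size
embed_dim = model.get_input_embeddings().embedding_dim
prompt_len = 10  # assumed soft-prompt length

# Trainable continuous "soft prompt" fed in place of real token embeddings.
soft_prompt = torch.nn.Parameter(0.02 * torch.randn(1, prompt_len, embed_dim))
optimizer = torch.optim.Adam([soft_prompt], lr=1e-2)

# Example target: a random distribution over the vocabulary.
target = torch.distributions.Dirichlet(torch.ones(vocab_size)).sample().unsqueeze(0)

for step in range(500):
    optimizer.zero_grad()
    out = model(inputs_embeds=soft_prompt)
    # Next-token distribution at the last prompt position.
    log_probs = F.log_softmax(out.logits[:, -1, :], dim=-1)
    # KL(target || model distribution) as the distance to minimize.
    loss = F.kl_div(log_probs, target, reduction="batchmean")
    loss.backward()
    optimizer.step()

Hard prompt tuning would instead search over discrete tokens (e.g., gradient-guided token substitution); the soft variant above is shown only as the simpler case to illustrate.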
Anthology ID:
2025.emnlp-main.1057
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20915–20928
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1057/
Cite (ACL):
Haojin Wang, Zining Zhu, and Freda Shi. 2025. Distribution Prompting: Understanding the Expressivity of Language Models Through the Next-Token Distributions They Can Produce. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 20915–20928, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Distribution Prompting: Understanding the Expressivity of Language Models Through the Next-Token Distributions They Can Produce (Wang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1057.pdf
Checklist:
 2025.emnlp-main.1057.checklist.pdf