2025
Non-Determinism of “Deterministic” LLM System Settings in Hosted Environments
Berk Atıl | Sarp Aykent | Alexa Chittams | Lisheng Fu | Rebecca J. Passonneau | Evan Radcliffe | Guru Rajan Rajagopal | Adam Sloan | Tomasz Tudrej | Ferhan Ture | Zhe Wu | Lixinyu Xu | Breck Baldwin
Proceedings of the 5th Workshop on Evaluation and Comparison of NLP Systems
LLM (large language model) users of hosted providers commonly notice that outputs can vary for the same inputs under settings expected to be deterministic. While it is difficult to get exact statistics, recent reports on specialty news sites and discussion boards suggest that among users in all communities, the majority of LLM usage today is through cloud-based APIs. Yet the questions of how pervasive non-determinism is, and how much it affects performance results, have not to our knowledge been systematically investigated. We apply five API-based LLMs configured to be deterministic to eight diverse tasks across 10 runs. Experiments reveal accuracy variations of up to 15% across runs, with a gap of up to 70% between best possible performance and worst possible performance. No LLM consistently delivers the same outputs or accuracies, regardless of task. We speculate about the sources of non-determinism, such as input buffer packing across multiple jobs. To better quantify our observations, we introduce metrics focused on quantifying determinism: TARr@N, the total agreement rate at N runs over raw output, and TARa@N, the total agreement rate of parsed-out answers. Our code and data will be publicly available at https://github.com/Anonymous.
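The sketch below illustrates how the TARr@N and TARa@N metrics named in the abstract could be computed. It assumes TAR@N is the fraction of prompts for which all N runs agree exactly, with TARr@N comparing raw outputs verbatim and TARa@N comparing parsed-out answers; the `parse_answer` helper is a hypothetical placeholder, not the paper's actual extraction code.

```python
"""Minimal sketch of TARr@N / TARa@N, under the assumptions stated above."""

from typing import Callable, Sequence


def tar_at_n(runs: Sequence[Sequence[str]],
             normalize: Callable[[str], str] = lambda s: s) -> float:
    """Fraction of items whose outputs are identical across all N runs.

    `runs` is a list of N runs, each a list of per-item outputs in the same
    order. With the identity normalizer this gives TARr@N; passing an answer
    parser gives TARa@N.
    """
    n_items = len(runs[0])
    agree = 0
    for outputs in zip(*runs):  # one tuple of N outputs per item
        if len({normalize(o) for o in outputs}) == 1:
            agree += 1
    return agree / n_items


def parse_answer(raw: str) -> str:
    """Hypothetical parser: take the last non-empty line as the answer."""
    lines = [ln.strip() for ln in raw.splitlines() if ln.strip()]
    return lines[-1] if lines else ""


if __name__ == "__main__":
    # Three runs over two prompts; raw outputs never agree, parsed answers
    # agree on the first prompt only.
    runs = [
        ["Reasoning...\nAnswer: B", "Sure.\nAnswer: 42"],
        ["Let me think.\nAnswer: B", "Answer: 41"],
        ["Answer: B", "Answer: 42"],
    ]
    print("TARr@3:", tar_at_n(runs))                          # 0.0
    print("TARa@3:", tar_at_n(runs, normalize=parse_answer))  # 0.5
```

The usage example shows why the two metrics can diverge: runs may disagree on verbatim text (reasoning, phrasing) while still agreeing on the extracted answer.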