Berk Atıl
Also published as: Berk Atil
2025
Non-Determinism of “Deterministic” LLM System Settings in Hosted Environments
Berk Atıl | Sarp Aykent | Alexa Chittams | Lisheng Fu | Rebecca J. Passonneau | Evan Radcliffe | Guru Rajan Rajagopal | Adam Sloan | Tomasz Tudrej | Ferhan Ture | Zhe Wu | Lixinyu Xu | Breck Baldwin
Proceedings of the 5th Workshop on Evaluation and Comparison of NLP Systems
LLM (large language model) users of hosted providers commonly notice that outputs can vary for the same inputs under settings expected to be deterministic. While it is difficult to get exact statistics, recent reports on specialty news sites and discussion boards suggest that among users in all communities, the majority of LLM usage today is through cloud-based APIs. Yet the questions of how pervasive non-determinism is, and how much it affects performance results, have not to our knowledge been systematically investigated. We apply five API-based LLMs configured to be deterministic to eight diverse tasks across 10 runs. Experiments reveal accuracy variations of up to 15% across runs, with a gap of up to 70% between best possible performance and worst possible performance. No LLM consistently delivers the same outputs or accuracies, regardless of task. We speculate about the sources of non-determinism such as input buffer packing across multiple jobs. To better quantify our observations, we introduce metrics focused on quantifying determinism, TARr@N for the total agreement rate at N runs over raw output, and TARa@N for total agreement rate of parsed-out answers. Our code and data will be publicly available at https://github.com/Anonymous.
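To make the metrics concrete, the sketch below shows one plausible reading of TARr@N and TARa@N, assuming "total agreement rate" means the fraction of prompts for which all N runs return identical outputs (raw strings for TARr@N, parsed answers for TARa@N). The data layout and parsing function are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of the TAR@N idea described in the abstract.
# Assumption: TAR@N is the fraction of prompts for which all N runs agree
# exactly -- on raw output strings for TARr@N, on parsed answers for TARa@N.
from typing import Callable, List


def total_agreement_rate(runs: List[List[str]]) -> float:
    """runs[i][j] = output of run i for prompt j; returns the fraction of
    prompts where every run produced an identical output."""
    n_prompts = len(runs[0])
    agree = sum(
        1 for j in range(n_prompts) if len({run[j] for run in runs}) == 1
    )
    return agree / n_prompts


def tar_r(runs: List[List[str]]) -> float:
    # TARr@N: agreement over raw output strings.
    return total_agreement_rate(runs)


def tar_a(runs: List[List[str]], parse: Callable[[str], str]) -> float:
    # TARa@N: agreement over parsed-out answers (e.g., an extracted label).
    return total_agreement_rate([[parse(o) for o in run] for run in runs])
```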
Can LLMs Rank the Harmfulness of Smaller LLMs? We are Not There Yet
Berk Atil | Vipul Gupta | Sarkar Snigdha Sarathi Das | Rebecca Passonneau
Proceedings of the 9th Workshop on Online Abuse and Harms (WOAH)
Large language models (LLMs) have become ubiquitous, so it is important to understand their risks and limitations, such as their propensity to generate harmful output. This includes smaller LLMs, which are important for settings with constrained compute resources, such as edge devices. Detection of LLM harm typically requires human annotation, which is expensive to collect. This work studies two questions: How do smaller LLMs rank regarding generation of harmful content? How well can larger LLMs annotate harmfulness? We prompt three small LLMs to elicit harmful content of various types, such as discriminatory language, offensive content, privacy invasion, or negative influence, and collect human rankings of their outputs. Then, we compare harm annotations from three state-of-the-art large LLMs with each other and with humans. We find that the smaller models differ with respect to harmfulness. We also find that large LLMs show low to moderate agreement with humans.
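The abstract does not name the agreement statistics used, so the snippet below is only an illustrative sketch of two common choices for this kind of comparison: percent agreement on categorical harm labels and Kendall's tau on harmfulness rankings. The example rankings are hypothetical.

```python
# Illustrative sketch only: common ways to compare LLM harm annotations
# against human judgments -- simple percent agreement for categorical labels
# and Kendall's tau for rankings of model harmfulness.
from scipy.stats import kendalltau


def percent_agreement(labels_a, labels_b):
    """Fraction of items on which two annotators assign the same label."""
    assert len(labels_a) == len(labels_b)
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)


# Hypothetical example: rankings of three small LLMs by harmfulness
# (1 = most harmful) from a human annotator and a large-LLM judge.
human_rank = [1, 2, 3]
llm_rank = [2, 1, 3]
tau, p_value = kendalltau(human_rank, llm_rank)
print(f"Kendall's tau: {tau:.2f} (p={p_value:.2f})")
```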