CONTESTS: a Framework for Consistency Testing of Span Probabilities in Language Models

Eitan Wagner, Yuli Slavutsky, Omri Abend


Abstract
Although language model scores are often treated as probabilities, their reliability as probability estimators has mainly been studied through calibration, overlooking other aspects. In particular, it is unclear whether language models produce the same value for different ways of assigning joint probabilities to word spans. Our work introduces a novel framework, ConTestS (Consistency Testing over Spans), involving statistical tests to assess score consistency across interchangeable completion and conditioning orders. We conduct experiments on post-release real and synthetic data to eliminate training effects. Our findings reveal that both Masked Language Models (MLMs) and autoregressive models exhibit inconsistent predictions, with autoregressive models showing larger discrepancies. Larger MLMs tend to produce more consistent predictions, while autoregressive models show the opposite trend. Moreover, for both model types, prediction entropies offer insights into the true word span likelihood and can therefore aid in selecting optimal decoding strategies. The inconsistencies revealed by our analysis, as well as their connection to prediction entropies and the differences between model types, can serve as useful guides for future research on addressing these limitations.
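To make the tested property concrete: for a two-token span, the chain rule allows the joint probability to be decomposed in either order, and a model whose conditionals are consistent with a single joint distribution must assign the same value either way. Below is a minimal sketch of that check for an MLM using the HuggingFace transformers API; it is not the authors' released ConTestS code, and the checkpoint, sentence, and span are illustrative assumptions (the paper applies statistical tests over many such spans rather than a single comparison).

```python
# A minimal sketch of the two-order consistency check described in the
# abstract, for a Masked Language Model. NOT the authors' released ConTestS
# code; checkpoint, sentence, and span are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # any MLM checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def log_prob_at(input_ids, position, token_id):
    """Log-probability the model assigns to token_id at `position`."""
    with torch.no_grad():
        logits = model(input_ids).logits
    return torch.log_softmax(logits[0, position], dim=-1)[token_id].item()

def joint_log_prob(text, span_ids, first, second):
    """Chain-rule joint log-prob of a two-token masked span,
    filling position `first` before position `second`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    mask_pos = (ids[0] == tokenizer.mask_token_id).nonzero().squeeze(-1).tolist()
    lp = log_prob_at(ids, mask_pos[first], span_ids[first])    # P(w_first | both masked)
    ids[0, mask_pos[first]] = span_ids[first]                  # reveal the first token
    lp += log_prob_at(ids, mask_pos[second], span_ids[second]) # P(w_second | w_first)
    return lp

text = f"She moved to {tokenizer.mask_token} {tokenizer.mask_token} last year."
span_ids = tokenizer("new york", add_special_tokens=False).input_ids
assert len(span_ids) == 2  # assumption: the span tokenizes into exactly two pieces

left_first = joint_log_prob(text, span_ids, first=0, second=1)   # P(w1) * P(w2 | w1)
right_first = joint_log_prob(text, span_ids, first=1, second=0)  # P(w2) * P(w1 | w2)
print(f"left-first  log-prob: {left_first:.4f}")
print(f"right-first log-prob: {right_first:.4f}")
print(f"discrepancy: {abs(left_first - right_first):.4f}")  # 0 iff the orders agree
```

A model whose conditionals came from a single well-defined joint distribution would yield a discrepancy of zero; the paper's finding is that, for both MLMs and autoregressive models, such discrepancies are systematically nonzero.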
Anthology ID: 2024.emnlp-main.866
Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 15469–15484
URL: https://preview.aclanthology.org/add-emnlp-2024-awards/2024.emnlp-main.866/
DOI: 10.18653/v1/2024.emnlp-main.866
Cite (ACL): Eitan Wagner, Yuli Slavutsky, and Omri Abend. 2024. CONTESTS: a Framework for Consistency Testing of Span Probabilities in Language Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 15469–15484, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): CONTESTS: a Framework for Consistency Testing of Span Probabilities in Language Models (Wagner et al., EMNLP 2024)
PDF: https://preview.aclanthology.org/add-emnlp-2024-awards/2024.emnlp-main.866.pdf