Measuring the Inconsistency of Large Language Models in Preferential Ranking

Xiutian Zhao, Ke Wang, Wei Peng


Abstract
Despite large language models’ (LLMs’) recent advancements, their bias and hallucination issues persist, and their ability to offer consistent preferential rankings remains underexplored. This study investigates the capacity of LLMs to provide consistent ordinal preferences, a crucial aspect in scenarios lacking absolute answers. We introduce a formalization of consistency based on order theory, outlining criteria such as transitivity, asymmetry, reversibility, and independence from irrelevant alternatives. Our diagnostic experiments on selected state-of-the-art LLMs reveal that they fail to meet these criteria, exhibiting strong positional bias and poor transitivity, with preferences easily swayed by irrelevant alternatives. These findings highlight a significant inconsistency in LLM-generated preferential rankings, underscoring the need for further research to address these limitations.
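The order-theoretic criteria named in the abstract admit a compact operational check over pairwise judgments. The sketch below is illustrative only and not taken from the paper; the judgment set and function names are hypothetical. It encodes a preference relation as a set of ordered pairs and tests asymmetry and transitivity, showing how a preference cycle satisfies the former while violating the latter.

```python
def is_asymmetric(prefs):
    # Asymmetry: if a is preferred to b, then b must not be preferred to a.
    return all((b, a) not in prefs for (a, b) in prefs)

def is_transitive(prefs):
    # Transitivity: a preferred to b and b preferred to c must imply
    # a preferred to c, for every chained pair in the relation.
    return all((a, c) in prefs
               for (a, b) in prefs
               for (b2, c) in prefs
               if b2 == b)

# Hypothetical pairwise judgments elicited from an LLM over items A, B, C:
# a preference cycle A > B > C > A.
judgments = {("A", "B"), ("B", "C"), ("C", "A")}

print(is_asymmetric(judgments))  # True: no pair is preferred in both directions
print(is_transitive(judgments))  # False: A > B and B > C, but A > C is missing
```

Checking reversibility and independence from irrelevant alternatives would additionally require re-querying the model with the option order swapped, or with a distractor option added, and comparing the elicited relations.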
Anthology ID:
2024.knowllm-1.14
Volume:
Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Sha Li, Manling Li, Michael JQ Zhang, Eunsol Choi, Mor Geva, Peter Hase, Heng Ji
Venues:
KnowLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
171–176
URL:
https://aclanthology.org/2024.knowllm-1.14
Cite (ACL):
Xiutian Zhao, Ke Wang, and Wei Peng. 2024. Measuring the Inconsistency of Large Language Models in Preferential Ranking. In Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024), pages 171–176, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Measuring the Inconsistency of Large Language Models in Preferential Ranking (Zhao et al., KnowLLM-WS 2024)
PDF:
https://aclanthology.org/2024.knowllm-1.14.pdf