Serial Position Effects of Large Language Models

Xiaobo Guo, Soroush Vosoughi


Abstract
We would like to express our gratitude to the Reviewers and the Area Chair for their insightful comments and for recognizing the robustness of our proposed framework for analyzing serial position effects (SPE) in LLMs. We appreciate the acknowledgment of our work in demonstrating how widespread this effect is across various LLMs, as well as of the experiments we conducted to mitigate SPE.

We acknowledge the concerns raised regarding the significance of the mitigation methods, including training-side solutions, chain-of-thought (CoT) prompting, and prompt engineering. The varying degrees of effectiveness observed across these methods highlight both the complexity and the importance of addressing this cognitive bias. We believe these effects are inherently rooted in LLMs, and a comprehensive solution that fully addresses SPE may be beyond the scope of this work. However, we have proposed practical strategies, such as using binary choices instead of multiple choices where feasible, limiting prompt length, and placing crucial information at the beginning of prompts. These suggestions are intended to help users, particularly those who are not experts in LLMs, make better use of these models.

We agree with the suggestion that a deeper analysis of the relationship between task characteristics and SPE could enhance the manuscript. As it stands, our findings indicate that higher model accuracy tends to correlate with a reduction in SPE, which aligns with expectations: a model that achieves 100% accuracy is unlikely to be influenced by SPE. Beyond this, we did not observe any clear relationships, which suggests that SPE may be influenced by a combination of factors, including the specific task, the model used, and the nature of the prompts. We will clarify this point in the final version of the manuscript.
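To make the prompt-side suggestions above concrete, the following is a minimal sketch (not taken from the paper; all function names and parameters are hypothetical) of how a user might place crucial instructions first, cap prompt length, and recast a multiple-choice question as position-independent binary questions.

```python
# Minimal sketch of the prompt-side mitigations discussed above.
# These helpers are illustrative assumptions, not the authors' released code.

def build_prompt(instruction: str, context: str, max_context_chars: int = 2000) -> str:
    """Put the crucial instruction at the very beginning and truncate long context."""
    return f"{instruction}\n\n{context[:max_context_chars]}"


def to_binary_questions(question: str, options: list[str]) -> list[str]:
    """Rewrite one multiple-choice question as several yes/no questions,
    so the answer no longer depends on an option's position in the list."""
    return [
        f"{question}\nIs the following answer correct: {opt}?\nAnswer yes or no."
        for opt in options
    ]


if __name__ == "__main__":
    prompts = to_binary_questions(
        "Which city hosted ACL 2025?",
        ["Vienna", "Toronto", "Bangkok", "Dublin"],
    )
    for p in prompts:
        print(build_prompt("Answer strictly with 'yes' or 'no'.", p))
```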
Anthology ID:
2025.findings-acl.52
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
927–953
URL:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.52/
DOI:
10.18653/v1/2025.findings-acl.52
Bibkey:
Cite (ACL):
Xiaobo Guo and Soroush Vosoughi. 2025. Serial Position Effects of Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 927–953, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Serial Position Effects of Large Language Models (Guo & Vosoughi, Findings 2025)
PDF:
https://preview.aclanthology.org/transition-to-people-yaml/2025.findings-acl.52.pdf